Jul 12 00:17:04.887044 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 00:17:04.887065 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025 Jul 12 00:17:04.887075 kernel: KASLR enabled Jul 12 00:17:04.887081 kernel: efi: EFI v2.7 by EDK II Jul 12 00:17:04.887086 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 12 00:17:04.887092 kernel: random: crng init done Jul 12 00:17:04.887099 kernel: ACPI: Early table checksum verification disabled Jul 12 00:17:04.887105 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 12 00:17:04.887111 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 12 00:17:04.887119 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887125 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887131 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887137 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887143 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887151 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887158 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887165 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887183 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:17:04.887190 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 12 00:17:04.887196 kernel: NUMA: Failed to 
initialise from firmware Jul 12 00:17:04.887204 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:17:04.887210 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jul 12 00:17:04.887216 kernel: Zone ranges: Jul 12 00:17:04.887222 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:17:04.887229 kernel: DMA32 empty Jul 12 00:17:04.887236 kernel: Normal empty Jul 12 00:17:04.887242 kernel: Movable zone start for each node Jul 12 00:17:04.887249 kernel: Early memory node ranges Jul 12 00:17:04.887255 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 12 00:17:04.887261 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 12 00:17:04.887269 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 12 00:17:04.887275 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 12 00:17:04.887281 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 12 00:17:04.887288 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 12 00:17:04.887294 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 12 00:17:04.887300 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:17:04.887307 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 12 00:17:04.887314 kernel: psci: probing for conduit method from ACPI. Jul 12 00:17:04.887320 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 12 00:17:04.887327 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:17:04.887336 kernel: psci: Trusted OS migration not required Jul 12 00:17:04.887342 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:17:04.887349 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 12 00:17:04.887358 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jul 12 00:17:04.887365 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jul 12 00:17:04.887372 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 12 00:17:04.887378 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:17:04.887385 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:17:04.887392 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:17:04.887398 kernel: CPU features: detected: Spectre-v4 Jul 12 00:17:04.887405 kernel: CPU features: detected: Spectre-BHB Jul 12 00:17:04.887412 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:17:04.887418 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:17:04.887426 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:17:04.887433 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:17:04.887440 kernel: alternatives: applying boot alternatives Jul 12 00:17:04.887448 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:17:04.887455 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 12 00:17:04.887462 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:17:04.887468 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:17:04.887475 kernel: Fallback order for Node 0: 0 Jul 12 00:17:04.887482 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 12 00:17:04.887488 kernel: Policy zone: DMA Jul 12 00:17:04.887495 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:17:04.887502 kernel: software IO TLB: area num 4. Jul 12 00:17:04.887588 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 12 00:17:04.887599 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved) Jul 12 00:17:04.887606 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 12 00:17:04.887613 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:17:04.887620 kernel: rcu: RCU event tracing is enabled. Jul 12 00:17:04.887627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 12 00:17:04.887634 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:17:04.887641 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:17:04.887647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:17:04.887654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 12 00:17:04.887661 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:17:04.887670 kernel: GICv3: 256 SPIs implemented Jul 12 00:17:04.887677 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:17:04.887683 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:17:04.887690 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 00:17:04.887697 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 12 00:17:04.887703 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 12 00:17:04.887710 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:17:04.887717 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:17:04.887724 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 12 00:17:04.887730 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 12 00:17:04.887737 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 00:17:04.887745 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:17:04.887752 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:17:04.887759 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:17:04.887765 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:17:04.887772 kernel: arm-pv: using stolen time PV Jul 12 00:17:04.887779 kernel: Console: colour dummy device 80x25 Jul 12 00:17:04.887786 kernel: ACPI: Core revision 20230628 Jul 12 00:17:04.887793 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Jul 12 00:17:04.887800 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:17:04.887807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 12 00:17:04.887815 kernel: landlock: Up and running. Jul 12 00:17:04.887822 kernel: SELinux: Initializing. Jul 12 00:17:04.887829 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:17:04.887836 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:17:04.887842 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 00:17:04.887849 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 00:17:04.887856 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:17:04.887863 kernel: rcu: Max phase no-delay instances is 400. Jul 12 00:17:04.887870 kernel: Platform MSI: ITS@0x8080000 domain created Jul 12 00:17:04.887878 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 12 00:17:04.887885 kernel: Remapping and enabling EFI services. Jul 12 00:17:04.887892 kernel: smp: Bringing up secondary CPUs ... 
Jul 12 00:17:04.887898 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:17:04.887905 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 12 00:17:04.887912 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 12 00:17:04.887919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:17:04.887926 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:17:04.887932 kernel: Detected PIPT I-cache on CPU2 Jul 12 00:17:04.887939 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 12 00:17:04.887948 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 12 00:17:04.887955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:17:04.887966 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 12 00:17:04.887976 kernel: Detected PIPT I-cache on CPU3 Jul 12 00:17:04.887983 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 12 00:17:04.887990 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 12 00:17:04.887998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:17:04.888004 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 12 00:17:04.888012 kernel: smp: Brought up 1 node, 4 CPUs Jul 12 00:17:04.888020 kernel: SMP: Total of 4 processors activated. 
Jul 12 00:17:04.888027 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:17:04.888035 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:17:04.888046 kernel: CPU features: detected: Common not Private translations Jul 12 00:17:04.888053 kernel: CPU features: detected: CRC32 instructions Jul 12 00:17:04.888060 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 12 00:17:04.888067 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:17:04.888075 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:17:04.888083 kernel: CPU features: detected: Privileged Access Never Jul 12 00:17:04.888090 kernel: CPU features: detected: RAS Extension Support Jul 12 00:17:04.888098 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 12 00:17:04.888105 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:17:04.888112 kernel: alternatives: applying system-wide alternatives Jul 12 00:17:04.888119 kernel: devtmpfs: initialized Jul 12 00:17:04.888127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:17:04.888134 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 12 00:17:04.888141 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:17:04.888150 kernel: SMBIOS 3.0.0 present. 
Jul 12 00:17:04.888157 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 12 00:17:04.888164 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:17:04.888172 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:17:04.888179 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:17:04.888187 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:17:04.888194 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:17:04.888201 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jul 12 00:17:04.888209 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:17:04.888217 kernel: cpuidle: using governor menu Jul 12 00:17:04.888224 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:17:04.888232 kernel: ASID allocator initialised with 32768 entries Jul 12 00:17:04.888239 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:17:04.888246 kernel: Serial: AMBA PL011 UART driver Jul 12 00:17:04.888253 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 00:17:04.888260 kernel: Modules: 0 pages in range for non-PLT usage Jul 12 00:17:04.888267 kernel: Modules: 509008 pages in range for PLT usage Jul 12 00:17:04.888274 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:17:04.888283 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 00:17:04.888290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:17:04.888297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 00:17:04.888304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:17:04.888311 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 00:17:04.888318 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Jul 12 00:17:04.888326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 00:17:04.888333 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:17:04.888340 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:17:04.888348 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:17:04.888356 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:17:04.888363 kernel: ACPI: Interpreter enabled Jul 12 00:17:04.888370 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:17:04.888377 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:17:04.888384 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:17:04.888391 kernel: printk: console [ttyAMA0] enabled Jul 12 00:17:04.888398 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 00:17:04.888543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:17:04.888646 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:17:04.888711 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:17:04.888775 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 12 00:17:04.888836 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 12 00:17:04.888846 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 12 00:17:04.888853 kernel: PCI host bridge to bus 0000:00 Jul 12 00:17:04.888922 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 12 00:17:04.888981 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 00:17:04.889036 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 12 00:17:04.889092 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 00:17:04.889171 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 
0x060000 Jul 12 00:17:04.889244 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 12 00:17:04.889310 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 12 00:17:04.889378 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 12 00:17:04.889442 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:17:04.889507 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:17:04.889606 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 12 00:17:04.889674 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 12 00:17:04.889732 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 12 00:17:04.889789 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:17:04.889850 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 12 00:17:04.889860 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:17:04.889867 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:17:04.889874 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:17:04.889882 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:17:04.889889 kernel: iommu: Default domain type: Translated Jul 12 00:17:04.889896 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:17:04.889903 kernel: efivars: Registered efivars operations Jul 12 00:17:04.889912 kernel: vgaarb: loaded Jul 12 00:17:04.889920 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:17:04.889927 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:17:04.889934 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:17:04.889941 kernel: pnp: PnP ACPI init Jul 12 00:17:04.890015 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 12 00:17:04.890025 kernel: pnp: PnP ACPI: found 1 devices Jul 12 
00:17:04.890033 kernel: NET: Registered PF_INET protocol family Jul 12 00:17:04.890040 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:17:04.890049 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:17:04.890056 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:17:04.890064 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:17:04.890071 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 00:17:04.890078 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:17:04.890085 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:17:04.890093 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:17:04.890100 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:17:04.890108 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:17:04.890116 kernel: kvm [1]: HYP mode not available Jul 12 00:17:04.890123 kernel: Initialise system trusted keyrings Jul 12 00:17:04.890130 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:17:04.890137 kernel: Key type asymmetric registered Jul 12 00:17:04.890144 kernel: Asymmetric key parser 'x509' registered Jul 12 00:17:04.890151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 12 00:17:04.890159 kernel: io scheduler mq-deadline registered Jul 12 00:17:04.890166 kernel: io scheduler kyber registered Jul 12 00:17:04.890173 kernel: io scheduler bfq registered Jul 12 00:17:04.890182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:17:04.890190 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:17:04.890197 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:17:04.890262 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 12 
00:17:04.890272 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:17:04.890279 kernel: thunder_xcv, ver 1.0 Jul 12 00:17:04.890286 kernel: thunder_bgx, ver 1.0 Jul 12 00:17:04.890293 kernel: nicpf, ver 1.0 Jul 12 00:17:04.890300 kernel: nicvf, ver 1.0 Jul 12 00:17:04.890374 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:17:04.890435 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:17:04 UTC (1752279424) Jul 12 00:17:04.890445 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:17:04.890452 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:17:04.890460 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 12 00:17:04.890467 kernel: watchdog: Hard watchdog permanently disabled Jul 12 00:17:04.890475 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:17:04.890482 kernel: Segment Routing with IPv6 Jul 12 00:17:04.890492 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:17:04.890499 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:17:04.890507 kernel: Key type dns_resolver registered Jul 12 00:17:04.890535 kernel: registered taskstats version 1 Jul 12 00:17:04.890543 kernel: Loading compiled-in X.509 certificates Jul 12 00:17:04.890551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15' Jul 12 00:17:04.890558 kernel: Key type .fscrypt registered Jul 12 00:17:04.890572 kernel: Key type fscrypt-provisioning registered Jul 12 00:17:04.890580 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 12 00:17:04.890589 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:17:04.890597 kernel: ima: No architecture policies found Jul 12 00:17:04.890604 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:17:04.890611 kernel: clk: Disabling unused clocks Jul 12 00:17:04.890618 kernel: Freeing unused kernel memory: 39424K Jul 12 00:17:04.890625 kernel: Run /init as init process Jul 12 00:17:04.890633 kernel: with arguments: Jul 12 00:17:04.890640 kernel: /init Jul 12 00:17:04.890647 kernel: with environment: Jul 12 00:17:04.890656 kernel: HOME=/ Jul 12 00:17:04.890663 kernel: TERM=linux Jul 12 00:17:04.890670 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:17:04.890679 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:17:04.890688 systemd[1]: Detected virtualization kvm. Jul 12 00:17:04.890696 systemd[1]: Detected architecture arm64. Jul 12 00:17:04.890704 systemd[1]: Running in initrd. Jul 12 00:17:04.890712 systemd[1]: No hostname configured, using default hostname. Jul 12 00:17:04.890720 systemd[1]: Hostname set to . Jul 12 00:17:04.890728 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:17:04.890735 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:17:04.890743 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:17:04.890751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:17:04.890759 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 12 00:17:04.890767 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:17:04.890776 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 00:17:04.890784 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 00:17:04.890793 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 00:17:04.890801 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 00:17:04.890809 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:17:04.890816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 00:17:04.890824 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:17:04.890833 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:17:04.890841 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:17:04.890848 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:17:04.890856 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 00:17:04.890864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 00:17:04.890872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 00:17:04.890879 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 12 00:17:04.890887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 00:17:04.890895 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:17:04.890904 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:17:04.890912 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 12 00:17:04.890920 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 00:17:04.890928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:17:04.890935 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 12 00:17:04.890943 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:17:04.890951 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:17:04.890959 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:17:04.890968 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:17:04.890975 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 00:17:04.890983 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:17:04.890991 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:17:04.891000 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:17:04.891009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:17:04.891035 systemd-journald[238]: Collecting audit messages is disabled. Jul 12 00:17:04.891054 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 00:17:04.891062 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:17:04.891072 systemd-journald[238]: Journal started Jul 12 00:17:04.891091 systemd-journald[238]: Runtime Journal (/run/log/journal/c9a6a2767ab643f69762722effbc484c) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:17:04.882279 systemd-modules-load[239]: Inserted module 'overlay' Jul 12 00:17:04.894148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:17:04.894176 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 12 00:17:04.897233 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:17:04.898250 systemd-modules-load[239]: Inserted module 'br_netfilter' Jul 12 00:17:04.899049 kernel: Bridge firewalling registered Jul 12 00:17:04.899474 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:17:04.901615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:17:04.903371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:17:04.906500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:04.911633 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:17:04.914558 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:17:04.916373 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 00:17:04.919602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:04.922392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:17:04.930863 dracut-cmdline[273]: dracut-dracut-053 Jul 12 00:17:04.933237 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c Jul 12 00:17:04.948021 systemd-resolved[276]: Positive Trust Anchors: Jul 12 00:17:04.948037 systemd-resolved[276]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:17:04.948073 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:17:04.952743 systemd-resolved[276]: Defaulting to hostname 'linux'. Jul 12 00:17:04.954177 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:17:04.956697 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:17:04.997541 kernel: SCSI subsystem initialized Jul 12 00:17:05.002530 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:17:05.009531 kernel: iscsi: registered transport (tcp) Jul 12 00:17:05.024543 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:17:05.024600 kernel: QLogic iSCSI HBA Driver Jul 12 00:17:05.067266 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 00:17:05.077684 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 00:17:05.095504 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 12 00:17:05.095611 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:17:05.095623 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:17:05.140539 kernel: raid6: neonx8 gen() 15742 MB/s Jul 12 00:17:05.157543 kernel: raid6: neonx4 gen() 15650 MB/s Jul 12 00:17:05.174541 kernel: raid6: neonx2 gen() 13249 MB/s Jul 12 00:17:05.191540 kernel: raid6: neonx1 gen() 10482 MB/s Jul 12 00:17:05.208540 kernel: raid6: int64x8 gen() 6952 MB/s Jul 12 00:17:05.225536 kernel: raid6: int64x4 gen() 7344 MB/s Jul 12 00:17:05.242528 kernel: raid6: int64x2 gen() 6127 MB/s Jul 12 00:17:05.259544 kernel: raid6: int64x1 gen() 5053 MB/s Jul 12 00:17:05.259583 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Jul 12 00:17:05.276548 kernel: raid6: .... xor() 11915 MB/s, rmw enabled Jul 12 00:17:05.276585 kernel: raid6: using neon recovery algorithm Jul 12 00:17:05.281538 kernel: xor: measuring software checksum speed Jul 12 00:17:05.281558 kernel: 8regs : 19679 MB/sec Jul 12 00:17:05.282567 kernel: 32regs : 19074 MB/sec Jul 12 00:17:05.282580 kernel: arm64_neon : 27052 MB/sec Jul 12 00:17:05.282589 kernel: xor: using function: arm64_neon (27052 MB/sec) Jul 12 00:17:05.334281 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:17:05.347559 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:17:05.358678 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:17:05.370498 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jul 12 00:17:05.373709 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:17:05.392730 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:17:05.405409 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation Jul 12 00:17:05.434246 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:17:05.444722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:17:05.485552 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:17:05.492776 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:17:05.506451 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:17:05.509793 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:17:05.511439 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:17:05.513862 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:17:05.527760 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:17:05.536747 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:17:05.536898 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:17:05.536955 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:17:05.546604 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:17:05.546918 kernel: GPT:9289727 != 19775487
Jul 12 00:17:05.546929 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:17:05.546938 kernel: GPT:9289727 != 19775487
Jul 12 00:17:05.546947 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:17:05.547414 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:17:05.548827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:17:05.547544 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:17:05.550607 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:17:05.551704 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:17:05.551869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:17:05.554207 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:17:05.564534 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (509)
Jul 12 00:17:05.566369 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (502)
Jul 12 00:17:05.565063 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:17:05.575541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:17:05.580077 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:17:05.587868 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:17:05.591884 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:17:05.593127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:17:05.598380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:17:05.607667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:17:05.609573 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:17:05.615406 disk-uuid[549]: Primary Header is updated.
Jul 12 00:17:05.615406 disk-uuid[549]: Secondary Entries is updated.
Jul 12 00:17:05.615406 disk-uuid[549]: Secondary Header is updated.
Jul 12 00:17:05.618535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:17:05.633803 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:17:06.633399 disk-uuid[550]: The operation has completed successfully.
Jul 12 00:17:06.634272 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:17:06.666460 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:17:06.666599 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:17:06.691707 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:17:06.694591 sh[572]: Success
Jul 12 00:17:06.707530 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:17:06.737406 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:17:06.753929 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:17:06.755983 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:17:06.773036 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:17:06.773090 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:17:06.774537 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:17:06.774577 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:17:06.774590 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:17:06.783065 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:17:06.784196 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:17:06.794681 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:17:06.796008 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:17:06.807931 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:17:06.807977 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:17:06.807987 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:17:06.810754 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:17:06.819220 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:17:06.820541 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:17:06.832110 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:17:06.838687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:17:06.889079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:17:06.898703 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:17:06.923347 systemd-networkd[757]: lo: Link UP
Jul 12 00:17:06.923356 systemd-networkd[757]: lo: Gained carrier
Jul 12 00:17:06.924049 systemd-networkd[757]: Enumeration completed
Jul 12 00:17:06.924165 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:17:06.924649 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:17:06.924652 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:17:06.925094 systemd[1]: Reached target network.target - Network.
Jul 12 00:17:06.925305 systemd-networkd[757]: eth0: Link UP
Jul 12 00:17:06.925308 systemd-networkd[757]: eth0: Gained carrier
Jul 12 00:17:06.925314 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:17:06.938700 ignition[681]: Ignition 2.19.0
Jul 12 00:17:06.938709 ignition[681]: Stage: fetch-offline
Jul 12 00:17:06.938744 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:06.938752 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:06.940565 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:17:06.938903 ignition[681]: parsed url from cmdline: ""
Jul 12 00:17:06.938906 ignition[681]: no config URL provided
Jul 12 00:17:06.938910 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:17:06.938917 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:17:06.938939 ignition[681]: op(1): [started] loading QEMU firmware config module
Jul 12 00:17:06.938943 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:17:06.954307 ignition[681]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:17:06.990681 ignition[681]: parsing config with SHA512: e9f4ae6bb3a92910abae6dd508c42ebfc3cadc6c44e05f77c609e48a1afca01bfb8d496f9f10154225f05bffe528cab38520c7ba550ba9462c53ed78ed37dae4
Jul 12 00:17:06.994808 unknown[681]: fetched base config from "system"
Jul 12 00:17:06.994818 unknown[681]: fetched user config from "qemu"
Jul 12 00:17:06.997058 ignition[681]: fetch-offline: fetch-offline passed
Jul 12 00:17:06.997137 ignition[681]: Ignition finished successfully
Jul 12 00:17:07.000078 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:17:07.001147 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:17:07.014752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:17:07.025226 ignition[768]: Ignition 2.19.0
Jul 12 00:17:07.025235 ignition[768]: Stage: kargs
Jul 12 00:17:07.025404 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:07.025413 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:07.026352 ignition[768]: kargs: kargs passed
Jul 12 00:17:07.029997 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:17:07.026395 ignition[768]: Ignition finished successfully
Jul 12 00:17:07.038714 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:17:07.048707 ignition[778]: Ignition 2.19.0
Jul 12 00:17:07.048718 ignition[778]: Stage: disks
Jul 12 00:17:07.048880 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:07.048890 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:07.049766 ignition[778]: disks: disks passed
Jul 12 00:17:07.051259 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:17:07.049811 ignition[778]: Ignition finished successfully
Jul 12 00:17:07.052893 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:17:07.054341 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:17:07.056060 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:17:07.057680 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:17:07.059450 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:17:07.075672 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:17:07.085707 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:17:07.090168 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:17:07.092627 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:17:07.142524 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:17:07.143044 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:17:07.144150 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:17:07.153635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:17:07.155336 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:17:07.156890 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:17:07.156944 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:17:07.156968 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:17:07.161923 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:17:07.164091 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Jul 12 00:17:07.164543 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:17:07.167536 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:17:07.167569 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:17:07.168520 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:17:07.170528 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:17:07.171988 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:17:07.220569 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:17:07.224737 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:17:07.228242 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:17:07.231816 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:17:07.308431 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:17:07.318681 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:17:07.320091 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:17:07.325534 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:17:07.343656 ignition[911]: INFO : Ignition 2.19.0
Jul 12 00:17:07.343656 ignition[911]: INFO : Stage: mount
Jul 12 00:17:07.345370 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:07.345370 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:07.345370 ignition[911]: INFO : mount: mount passed
Jul 12 00:17:07.345370 ignition[911]: INFO : Ignition finished successfully
Jul 12 00:17:07.348436 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:17:07.350264 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:17:07.360667 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:17:07.772592 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:17:07.786749 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:17:07.792996 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (926)
Jul 12 00:17:07.793042 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:17:07.793054 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:17:07.794527 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:17:07.796540 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:17:07.797343 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:17:07.814458 ignition[943]: INFO : Ignition 2.19.0
Jul 12 00:17:07.816376 ignition[943]: INFO : Stage: files
Jul 12 00:17:07.816376 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:07.816376 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:07.819325 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:17:07.819325 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:17:07.819325 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:17:07.822358 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:17:07.822358 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:17:07.822358 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:17:07.822358 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:17:07.822358 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 12 00:17:07.820634 unknown[943]: wrote ssh authorized keys file for user: core
Jul 12 00:17:07.883890 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:17:08.050003 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 12 00:17:08.050003 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:17:08.052787 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:17:08.158740 systemd-networkd[757]: eth0: Gained IPv6LL
Jul 12 00:17:08.364947 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:17:08.468574 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:17:08.481470 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:17:08.481470 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:17:08.481470 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:17:08.481470 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:17:08.481470 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 12 00:17:08.780741 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:17:09.532844 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 12 00:17:09.532844 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:17:09.535954 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:17:09.564717 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:17:09.569180 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:17:09.569180 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:17:09.569180 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:17:09.569180 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:17:09.575749 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:17:09.575749 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:17:09.575749 ignition[943]: INFO : files: files passed
Jul 12 00:17:09.575749 ignition[943]: INFO : Ignition finished successfully
Jul 12 00:17:09.571957 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:17:09.583687 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:17:09.585241 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:17:09.588471 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:17:09.589346 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:17:09.593612 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:17:09.596763 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:17:09.596763 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:17:09.598869 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:17:09.598565 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:17:09.600070 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:17:09.607752 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:17:09.635686 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:17:09.636536 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:17:09.637653 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:17:09.639137 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:17:09.640464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:17:09.641385 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:17:09.657388 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:17:09.660290 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:17:09.674829 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:17:09.675784 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:17:09.677454 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:17:09.678996 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:17:09.679122 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:17:09.681415 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:17:09.683179 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:17:09.684589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:17:09.686043 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:17:09.687721 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:17:09.689536 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:17:09.691149 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:17:09.693136 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:17:09.694729 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:17:09.696501 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:17:09.697940 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:17:09.698117 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:17:09.700526 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:17:09.702819 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:17:09.704828 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:17:09.704913 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:17:09.706910 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:17:09.707036 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:17:09.709610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:17:09.709732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:17:09.711459 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:17:09.712805 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:17:09.718615 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:17:09.719628 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:17:09.721414 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:17:09.722753 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:17:09.722851 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:17:09.724205 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:17:09.724285 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:17:09.725575 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:17:09.725691 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:17:09.727148 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:17:09.727245 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:17:09.740832 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:17:09.742254 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:17:09.742972 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:17:09.743089 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:17:09.744709 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:17:09.744813 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:17:09.750984 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:17:09.751928 ignition[997]: INFO : Ignition 2.19.0
Jul 12 00:17:09.751928 ignition[997]: INFO : Stage: umount
Jul 12 00:17:09.751928 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:17:09.751928 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:17:09.759005 ignition[997]: INFO : umount: umount passed
Jul 12 00:17:09.759005 ignition[997]: INFO : Ignition finished successfully
Jul 12 00:17:09.752727 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:17:09.755114 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:17:09.755198 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:17:09.756853 systemd[1]: Stopped target network.target - Network.
Jul 12 00:17:09.757856 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:17:09.757932 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:17:09.760756 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:17:09.760839 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:17:09.762051 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:17:09.762094 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:17:09.763948 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:17:09.763992 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:17:09.766053 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:17:09.770750 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:17:09.773717 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:17:09.774343 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:17:09.774430 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:17:09.775834 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:17:09.775917 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:17:09.779570 systemd-networkd[757]: eth0: DHCPv6 lease lost
Jul 12 00:17:09.781771 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:17:09.781902 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:17:09.783255 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:17:09.783354 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:17:09.786874 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:17:09.786927 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:17:09.801672 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:17:09.802325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:17:09.802387 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:17:09.804322 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:17:09.804366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:17:09.806099 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:17:09.806144 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:17:09.808087 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:17:09.808132 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:17:09.809769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:17:09.822328 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:17:09.822440 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 00:17:09.825248 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:17:09.825405 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:17:09.827325 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:17:09.827363 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 00:17:09.829452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:17:09.829500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:17:09.831517 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:17:09.831592 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:17:09.833986 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:17:09.834030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 00:17:09.836376 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:17:09.836415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 00:17:09.850703 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 00:17:09.851744 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 00:17:09.851803 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 00:17:09.853328 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jul 12 00:17:09.853366 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:17:09.854907 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:17:09.854960 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:17:09.856470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:17:09.856507 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:17:09.858313 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:17:09.858414 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 00:17:09.861121 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 00:17:09.862802 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 00:17:09.873316 systemd[1]: Switching root. Jul 12 00:17:09.900684 systemd-journald[238]: Journal stopped Jul 12 00:17:10.644971 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 12 00:17:10.645031 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:17:10.645044 kernel: SELinux: policy capability open_perms=1 Jul 12 00:17:10.645057 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:17:10.645066 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:17:10.645076 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:17:10.645085 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:17:10.645101 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:17:10.645111 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:17:10.645121 kernel: audit: type=1403 audit(1752279430.084:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:17:10.645132 systemd[1]: Successfully loaded SELinux policy in 31.354ms. 
Jul 12 00:17:10.645145 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.413ms. Jul 12 00:17:10.645156 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 12 00:17:10.645167 systemd[1]: Detected virtualization kvm. Jul 12 00:17:10.645180 systemd[1]: Detected architecture arm64. Jul 12 00:17:10.645190 systemd[1]: Detected first boot. Jul 12 00:17:10.645202 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:17:10.647055 zram_generator::config[1045]: No configuration found. Jul 12 00:17:10.647104 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:17:10.647118 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:17:10.647129 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 00:17:10.647140 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:17:10.647151 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 00:17:10.647162 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 00:17:10.647178 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 00:17:10.647188 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 00:17:10.647199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 00:17:10.647210 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 00:17:10.647221 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 00:17:10.647233 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 12 00:17:10.647243 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 00:17:10.647254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 00:17:10.647265 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 00:17:10.647277 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 00:17:10.647288 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 00:17:10.647299 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 00:17:10.647309 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 00:17:10.647319 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 00:17:10.647330 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 00:17:10.647344 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 00:17:10.647355 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 00:17:10.647366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 00:17:10.647377 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 00:17:10.647388 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 00:17:10.647398 systemd[1]: Reached target slices.target - Slice Units. Jul 12 00:17:10.647409 systemd[1]: Reached target swap.target - Swaps. Jul 12 00:17:10.647419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 00:17:10.647430 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 00:17:10.647440 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 00:17:10.647451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 00:17:10.647463 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 00:17:10.647473 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 00:17:10.647484 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 00:17:10.647495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 00:17:10.647505 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 00:17:10.647559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 00:17:10.647572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 00:17:10.647582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 00:17:10.647597 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:17:10.647607 systemd[1]: Reached target machines.target - Containers. Jul 12 00:17:10.647617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 00:17:10.647628 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:17:10.647639 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 00:17:10.647649 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 00:17:10.647660 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:17:10.647671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:17:10.647681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 12 00:17:10.647694 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 00:17:10.647704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:17:10.647715 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:17:10.647726 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:17:10.647736 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 00:17:10.647747 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:17:10.647757 kernel: fuse: init (API version 7.39) Jul 12 00:17:10.647767 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:17:10.647796 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 00:17:10.647806 kernel: loop: module loaded Jul 12 00:17:10.647817 kernel: ACPI: bus type drm_connector registered Jul 12 00:17:10.647827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 00:17:10.647837 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 00:17:10.647848 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 12 00:17:10.647858 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 00:17:10.647869 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:17:10.647880 systemd[1]: Stopped verity-setup.service. Jul 12 00:17:10.647919 systemd-journald[1111]: Collecting audit messages is disabled. Jul 12 00:17:10.647943 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 00:17:10.647954 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 00:17:10.647965 systemd[1]: Mounted media.mount - External Media Directory. 
Jul 12 00:17:10.647978 systemd-journald[1111]: Journal started Jul 12 00:17:10.648000 systemd-journald[1111]: Runtime Journal (/run/log/journal/c9a6a2767ab643f69762722effbc484c) is 5.9M, max 47.3M, 41.4M free. Jul 12 00:17:10.463078 systemd[1]: Queued start job for default target multi-user.target. Jul 12 00:17:10.480505 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 00:17:10.480911 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:17:10.650023 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 00:17:10.650731 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 00:17:10.651767 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 00:17:10.652857 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 00:17:10.653904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 00:17:10.655079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:17:10.655238 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 00:17:10.656450 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:17:10.656615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:17:10.657730 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:17:10.657954 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:17:10.659013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:17:10.659153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:17:10.660419 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:17:10.660596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 00:17:10.661664 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 12 00:17:10.661799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:17:10.663055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 00:17:10.664214 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 00:17:10.665429 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 00:17:10.666875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 00:17:10.679668 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 00:17:10.690662 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 12 00:17:10.692982 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 00:17:10.694121 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:17:10.694164 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 00:17:10.696226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 12 00:17:10.698562 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 00:17:10.700842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 00:17:10.701984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:17:10.703506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 00:17:10.705586 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 00:17:10.706823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 12 00:17:10.710781 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 00:17:10.714077 systemd-journald[1111]: Time spent on flushing to /var/log/journal/c9a6a2767ab643f69762722effbc484c is 22.417ms for 855 entries. Jul 12 00:17:10.714077 systemd-journald[1111]: System Journal (/var/log/journal/c9a6a2767ab643f69762722effbc484c) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:17:10.764597 systemd-journald[1111]: Received client request to flush runtime journal. Jul 12 00:17:10.764655 kernel: loop0: detected capacity change from 0 to 203944 Jul 12 00:17:10.764668 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:17:10.711893 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:17:10.713839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:10.719769 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 00:17:10.722254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 00:17:10.724924 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 00:17:10.726299 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 00:17:10.727588 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 00:17:10.729218 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 00:17:10.745907 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 12 00:17:10.750643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 00:17:10.752534 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jul 12 00:17:10.764746 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 12 00:17:10.766620 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 00:17:10.768348 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:10.775035 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 12 00:17:10.775052 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 12 00:17:10.781697 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 12 00:17:10.785718 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 00:17:10.795609 kernel: loop1: detected capacity change from 0 to 114432 Jul 12 00:17:10.795938 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 00:17:10.810558 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:17:10.814377 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 12 00:17:10.825034 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 00:17:10.833531 kernel: loop2: detected capacity change from 0 to 114328 Jul 12 00:17:10.833776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 00:17:10.850053 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jul 12 00:17:10.850073 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jul 12 00:17:10.854310 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 12 00:17:10.869565 kernel: loop3: detected capacity change from 0 to 203944 Jul 12 00:17:10.878527 kernel: loop4: detected capacity change from 0 to 114432 Jul 12 00:17:10.886538 kernel: loop5: detected capacity change from 0 to 114328 Jul 12 00:17:10.895379 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 00:17:10.895777 (sd-merge)[1184]: Merged extensions into '/usr'. Jul 12 00:17:10.899274 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 00:17:10.899290 systemd[1]: Reloading... Jul 12 00:17:10.961561 zram_generator::config[1213]: No configuration found. Jul 12 00:17:11.044609 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:17:11.050828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:11.087600 systemd[1]: Reloading finished in 187 ms. Jul 12 00:17:11.111566 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 00:17:11.114022 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 00:17:11.131765 systemd[1]: Starting ensure-sysext.service... Jul 12 00:17:11.133667 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 00:17:11.140556 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Jul 12 00:17:11.140573 systemd[1]: Reloading... Jul 12 00:17:11.150440 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:17:11.151032 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jul 12 00:17:11.151767 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:17:11.152071 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 12 00:17:11.152193 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jul 12 00:17:11.154576 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:17:11.154673 systemd-tmpfiles[1246]: Skipping /boot Jul 12 00:17:11.161485 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 00:17:11.161786 systemd-tmpfiles[1246]: Skipping /boot Jul 12 00:17:11.195554 zram_generator::config[1273]: No configuration found. Jul 12 00:17:11.280778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:11.316792 systemd[1]: Reloading finished in 175 ms. Jul 12 00:17:11.330629 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 00:17:11.338089 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 00:17:11.346233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:17:11.348935 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 00:17:11.351368 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 00:17:11.354945 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 00:17:11.358892 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:17:11.363858 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 12 00:17:11.367309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:17:11.368831 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:17:11.372893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:17:11.375954 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:17:11.377128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:17:11.385322 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 00:17:11.386935 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 00:17:11.389238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:17:11.389428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:17:11.393960 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:17:11.394180 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:17:11.396041 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:17:11.396191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:17:11.404124 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:17:11.413397 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:17:11.416711 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jul 12 00:17:11.417758 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 00:17:11.424685 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 12 00:17:11.425891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:17:11.431444 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 00:17:11.433699 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 00:17:11.435599 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 00:17:11.437562 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:17:11.437710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:17:11.439410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:17:11.439596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:17:11.441447 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:17:11.441602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:17:11.446040 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 00:17:11.447692 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:17:11.449437 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 00:17:11.460604 systemd[1]: Finished ensure-sysext.service. Jul 12 00:17:11.460726 augenrules[1355]: No rules Jul 12 00:17:11.461820 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:17:11.465314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 00:17:11.475774 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 00:17:11.482657 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 00:17:11.487425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 12 00:17:11.494015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 00:17:11.496833 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 00:17:11.501490 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 00:17:11.501784 systemd-resolved[1313]: Positive Trust Anchors: Jul 12 00:17:11.501802 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:17:11.501833 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 00:17:11.507699 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) Jul 12 00:17:11.507722 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 00:17:11.509146 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:17:11.509742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:17:11.509922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 00:17:11.511397 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:17:11.511409 systemd-resolved[1313]: Defaulting to hostname 'linux'. 
Jul 12 00:17:11.513563 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 00:17:11.515376 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 00:17:11.518788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:17:11.518923 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 00:17:11.520958 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:17:11.521087 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 00:17:11.542165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 00:17:11.544102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 00:17:11.554706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 00:17:11.555945 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:17:11.556013 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 00:17:11.556277 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 00:17:11.574599 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 00:17:11.576747 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 00:17:11.578672 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 00:17:11.597268 systemd-networkd[1385]: lo: Link UP Jul 12 00:17:11.597275 systemd-networkd[1385]: lo: Gained carrier Jul 12 00:17:11.598966 systemd-networkd[1385]: Enumeration completed Jul 12 00:17:11.599125 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jul 12 00:17:11.599441 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:17:11.599445 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:17:11.600189 systemd-networkd[1385]: eth0: Link UP Jul 12 00:17:11.600193 systemd-networkd[1385]: eth0: Gained carrier Jul 12 00:17:11.600206 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 00:17:11.601060 systemd[1]: Reached target network.target - Network. Jul 12 00:17:11.610791 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 00:17:11.612631 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.83/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:17:11.619054 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Jul 12 00:17:11.619694 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:17:11.619742 systemd-timesyncd[1387]: Initial clock synchronization to Sat 2025-07-12 00:17:11.629574 UTC. Jul 12 00:17:11.621467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 00:17:11.632903 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 12 00:17:11.636268 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 12 00:17:11.658535 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:17:11.674585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 00:17:11.685648 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 12 00:17:11.687598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jul 12 00:17:11.688855 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 00:17:11.690140 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 00:17:11.691400 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 00:17:11.692821 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 00:17:11.693968 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 00:17:11.698553 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 00:17:11.707805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:17:11.707844 systemd[1]: Reached target paths.target - Path Units. Jul 12 00:17:11.708723 systemd[1]: Reached target timers.target - Timer Units. Jul 12 00:17:11.710180 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 00:17:11.712684 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 12 00:17:11.723601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 00:17:11.725960 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 12 00:17:11.727604 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 00:17:11.728761 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 00:17:11.729704 systemd[1]: Reached target basic.target - Basic System. Jul 12 00:17:11.730631 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 00:17:11.730658 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jul 12 00:17:11.731739 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 00:17:11.734728 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 00:17:11.735221 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:17:11.738678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 00:17:11.741052 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 00:17:11.743202 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 00:17:11.749385 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 00:17:11.751602 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 00:17:11.754035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 00:17:11.759159 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 00:17:11.760278 jq[1417]: false Jul 12 00:17:11.764847 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 00:17:11.767240 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:17:11.767770 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:17:11.770721 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 00:17:11.775643 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 12 00:17:11.777436 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 12 00:17:11.779829 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 12 00:17:11.780000 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 00:17:11.782266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:17:11.782713 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 00:17:11.783775 dbus-daemon[1416]: [system] SELinux support is enabled Jul 12 00:17:11.784749 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 00:17:11.788038 extend-filesystems[1418]: Found loop3 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found loop4 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found loop5 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda1 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda2 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda3 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found usr Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda4 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda6 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda7 Jul 12 00:17:11.789579 extend-filesystems[1418]: Found vda9 Jul 12 00:17:11.789579 extend-filesystems[1418]: Checking size of /dev/vda9 Jul 12 00:17:11.789812 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:17:11.789982 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 00:17:11.811173 jq[1432]: true Jul 12 00:17:11.811784 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:17:11.811837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 12 00:17:11.813167 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:17:11.813184 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 00:17:11.818469 jq[1439]: true Jul 12 00:17:11.825732 extend-filesystems[1418]: Resized partition /dev/vda9 Jul 12 00:17:11.826483 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 00:17:11.827766 tar[1436]: linux-arm64/helm Jul 12 00:17:11.832675 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Jul 12 00:17:11.843085 update_engine[1427]: I20250712 00:17:11.842748 1427 main.cc:92] Flatcar Update Engine starting Jul 12 00:17:11.847706 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1360) Jul 12 00:17:11.847748 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:17:11.847784 update_engine[1427]: I20250712 00:17:11.847357 1427 update_check_scheduler.cc:74] Next update check in 9m17s Jul 12 00:17:11.848724 systemd[1]: Started update-engine.service - Update Engine. Jul 12 00:17:11.852187 systemd-logind[1423]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:17:11.853834 systemd-logind[1423]: New seat seat0. Jul 12 00:17:11.855683 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 00:17:11.858929 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 12 00:17:11.890003 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:17:11.925758 extend-filesystems[1454]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:17:11.925758 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:17:11.925758 extend-filesystems[1454]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 00:17:11.932957 extend-filesystems[1418]: Resized filesystem in /dev/vda9 Jul 12 00:17:11.927745 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:17:11.927938 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 12 00:17:11.937378 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:17:11.938376 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 00:17:11.944151 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 00:17:11.952687 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:17:12.070542 containerd[1438]: time="2025-07-12T00:17:12.070042868Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 12 00:17:12.100934 containerd[1438]: time="2025-07-12T00:17:12.100816739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102324 containerd[1438]: time="2025-07-12T00:17:12.102206492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102324 containerd[1438]: time="2025-07-12T00:17:12.102304642Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 12 00:17:12.102324 containerd[1438]: time="2025-07-12T00:17:12.102322888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:17:12.102526 containerd[1438]: time="2025-07-12T00:17:12.102490500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 12 00:17:12.102555 containerd[1438]: time="2025-07-12T00:17:12.102529912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102618 containerd[1438]: time="2025-07-12T00:17:12.102600695Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102646 containerd[1438]: time="2025-07-12T00:17:12.102618060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102946 containerd[1438]: time="2025-07-12T00:17:12.102914912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102946 containerd[1438]: time="2025-07-12T00:17:12.102941201Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.102985 containerd[1438]: time="2025-07-12T00:17:12.102955765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:17:12.103029 containerd[1438]: time="2025-07-12T00:17:12.102965448Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 12 00:17:12.103121 containerd[1438]: time="2025-07-12T00:17:12.103107693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.103435 containerd[1438]: time="2025-07-12T00:17:12.103409387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:17:12.103582 containerd[1438]: time="2025-07-12T00:17:12.103552271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:17:12.103606 containerd[1438]: time="2025-07-12T00:17:12.103584841Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:17:12.103695 containerd[1438]: time="2025-07-12T00:17:12.103679991Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 12 00:17:12.103735 containerd[1438]: time="2025-07-12T00:17:12.103724725Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:17:12.108410 containerd[1438]: time="2025-07-12T00:17:12.108373093Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:17:12.108458 containerd[1438]: time="2025-07-12T00:17:12.108429831Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:17:12.108458 containerd[1438]: time="2025-07-12T00:17:12.108453238Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 12 00:17:12.108530 containerd[1438]: time="2025-07-12T00:17:12.108468483Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jul 12 00:17:12.108530 containerd[1438]: time="2025-07-12T00:17:12.108522100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:17:12.108853 containerd[1438]: time="2025-07-12T00:17:12.108819793Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:17:12.109173 containerd[1438]: time="2025-07-12T00:17:12.109153737Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:17:12.109290 containerd[1438]: time="2025-07-12T00:17:12.109272974Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 12 00:17:12.109326 containerd[1438]: time="2025-07-12T00:17:12.109293660Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 12 00:17:12.109326 containerd[1438]: time="2025-07-12T00:17:12.109308785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 12 00:17:12.109326 containerd[1438]: time="2025-07-12T00:17:12.109321549Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109383 containerd[1438]: time="2025-07-12T00:17:12.109335953Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109383 containerd[1438]: time="2025-07-12T00:17:12.109348277Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109383 containerd[1438]: time="2025-07-12T00:17:12.109361681Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 12 00:17:12.109383 containerd[1438]: time="2025-07-12T00:17:12.109376566Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109450 containerd[1438]: time="2025-07-12T00:17:12.109388690Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109450 containerd[1438]: time="2025-07-12T00:17:12.109401054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109450 containerd[1438]: time="2025-07-12T00:17:12.109412937Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:17:12.109450 containerd[1438]: time="2025-07-12T00:17:12.109431543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109450 containerd[1438]: time="2025-07-12T00:17:12.109445147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109458192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109471516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109484160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109496884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109508207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109558823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109572907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109593 containerd[1438]: time="2025-07-12T00:17:12.109593354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109607558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109620082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109635767Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109662055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109679621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109747 containerd[1438]: time="2025-07-12T00:17:12.109690944Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:17:12.109852 containerd[1438]: time="2025-07-12T00:17:12.109798418Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 12 00:17:12.109852 containerd[1438]: time="2025-07-12T00:17:12.109817784Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 12 00:17:12.109852 containerd[1438]: time="2025-07-12T00:17:12.109828227Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:17:12.109852 containerd[1438]: time="2025-07-12T00:17:12.109841191Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 12 00:17:12.109852 containerd[1438]: time="2025-07-12T00:17:12.109851834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:17:12.109941 containerd[1438]: time="2025-07-12T00:17:12.109868439Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 12 00:17:12.109941 containerd[1438]: time="2025-07-12T00:17:12.109878843Z" level=info msg="NRI interface is disabled by configuration." Jul 12 00:17:12.109941 containerd[1438]: time="2025-07-12T00:17:12.109888526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.111662318Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.111988780Z" level=info msg="Connect containerd service" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.112028232Z" level=info msg="using legacy CRI server" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.112036835Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.112110538Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.112748577Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.113023983Z" level=info msg="Start subscribing containerd event" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.113072838Z" level=info msg="Start recovering state" Jul 12 00:17:12.113259 containerd[1438]: time="2025-07-12T00:17:12.113251454Z" level=info msg="Start event monitor" Jul 12 00:17:12.113560 containerd[1438]: time="2025-07-12T00:17:12.113267499Z" level=info msg="Start 
snapshots syncer" Jul 12 00:17:12.113560 containerd[1438]: time="2025-07-12T00:17:12.113338681Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:17:12.113560 containerd[1438]: time="2025-07-12T00:17:12.113347484Z" level=info msg="Start streaming server" Jul 12 00:17:12.113686 containerd[1438]: time="2025-07-12T00:17:12.113664502Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:17:12.113801 containerd[1438]: time="2025-07-12T00:17:12.113787701Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 00:17:12.113936 containerd[1438]: time="2025-07-12T00:17:12.113922943Z" level=info msg="containerd successfully booted in 0.045233s" Jul 12 00:17:12.114024 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 00:17:12.227709 tar[1436]: linux-arm64/LICENSE Jul 12 00:17:12.227803 tar[1436]: linux-arm64/README.md Jul 12 00:17:12.239907 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 00:17:12.289970 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:17:12.309135 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 00:17:12.320775 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 00:17:12.327265 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:17:12.327469 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 00:17:12.329804 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 00:17:12.344099 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 00:17:12.346866 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 00:17:12.348653 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 00:17:12.349615 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 12 00:17:13.534681 systemd-networkd[1385]: eth0: Gained IPv6LL Jul 12 00:17:13.537158 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 00:17:13.538810 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 00:17:13.549733 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 00:17:13.552998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:13.554812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 00:17:13.568584 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:17:13.569676 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 00:17:13.570901 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 00:17:13.575736 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 00:17:14.102563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:14.103779 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 00:17:14.106338 systemd[1]: Startup finished in 570ms (kernel) + 5.383s (initrd) + 4.056s (userspace) = 10.010s. 
Jul 12 00:17:14.106995 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:17:14.542592 kubelet[1529]: E0712 00:17:14.542450 1529 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:17:14.545316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:17:14.545458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:17:17.330377 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 00:17:17.331796 systemd[1]: Started sshd@0-10.0.0.83:22-10.0.0.1:41542.service - OpenSSH per-connection server daemon (10.0.0.1:41542). Jul 12 00:17:17.378614 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 41542 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:17.382246 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:17.399183 systemd-logind[1423]: New session 1 of user core. Jul 12 00:17:17.400157 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 00:17:17.416282 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 00:17:17.426948 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 00:17:17.428962 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 00:17:17.434769 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:17:17.519215 systemd[1546]: Queued start job for default target default.target. 
Jul 12 00:17:17.529347 systemd[1546]: Created slice app.slice - User Application Slice. Jul 12 00:17:17.529374 systemd[1546]: Reached target paths.target - Paths. Jul 12 00:17:17.529386 systemd[1546]: Reached target timers.target - Timers. Jul 12 00:17:17.530493 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 00:17:17.540416 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 00:17:17.540471 systemd[1546]: Reached target sockets.target - Sockets. Jul 12 00:17:17.540483 systemd[1546]: Reached target basic.target - Basic System. Jul 12 00:17:17.540570 systemd[1546]: Reached target default.target - Main User Target. Jul 12 00:17:17.540603 systemd[1546]: Startup finished in 101ms. Jul 12 00:17:17.540796 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 00:17:17.542019 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 00:17:17.602583 systemd[1]: Started sshd@1-10.0.0.83:22-10.0.0.1:41548.service - OpenSSH per-connection server daemon (10.0.0.1:41548). Jul 12 00:17:17.637718 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 41548 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:17.639027 sshd[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:17.644170 systemd-logind[1423]: New session 2 of user core. Jul 12 00:17:17.655703 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 00:17:17.708993 sshd[1557]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:17.718795 systemd[1]: sshd@1-10.0.0.83:22-10.0.0.1:41548.service: Deactivated successfully. Jul 12 00:17:17.720319 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:17:17.721620 systemd-logind[1423]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:17:17.722762 systemd[1]: Started sshd@2-10.0.0.83:22-10.0.0.1:41562.service - OpenSSH per-connection server daemon (10.0.0.1:41562). 
Jul 12 00:17:17.724871 systemd-logind[1423]: Removed session 2. Jul 12 00:17:17.756309 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 41562 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:17.757618 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:17.761086 systemd-logind[1423]: New session 3 of user core. Jul 12 00:17:17.777656 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 00:17:17.826549 sshd[1564]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:17.836919 systemd[1]: sshd@2-10.0.0.83:22-10.0.0.1:41562.service: Deactivated successfully. Jul 12 00:17:17.838576 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:17:17.841150 systemd-logind[1423]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:17:17.857821 systemd[1]: Started sshd@3-10.0.0.83:22-10.0.0.1:41576.service - OpenSSH per-connection server daemon (10.0.0.1:41576). Jul 12 00:17:17.861855 systemd-logind[1423]: Removed session 3. Jul 12 00:17:17.895606 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 41576 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:17.896868 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:17.901469 systemd-logind[1423]: New session 4 of user core. Jul 12 00:17:17.912660 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 00:17:17.966939 sshd[1571]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:17.980045 systemd[1]: sshd@3-10.0.0.83:22-10.0.0.1:41576.service: Deactivated successfully. Jul 12 00:17:17.981334 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:17:17.984498 systemd-logind[1423]: Session 4 logged out. Waiting for processes to exit. Jul 12 00:17:17.985614 systemd[1]: Started sshd@4-10.0.0.83:22-10.0.0.1:41584.service - OpenSSH per-connection server daemon (10.0.0.1:41584). 
Jul 12 00:17:17.987881 systemd-logind[1423]: Removed session 4. Jul 12 00:17:18.021333 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 41584 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:18.025869 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:18.033118 systemd-logind[1423]: New session 5 of user core. Jul 12 00:17:18.046718 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 12 00:17:18.125468 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 00:17:18.126082 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:18.140461 sudo[1581]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:18.142840 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:18.156093 systemd[1]: sshd@4-10.0.0.83:22-10.0.0.1:41584.service: Deactivated successfully. Jul 12 00:17:18.158865 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:17:18.160796 systemd-logind[1423]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:17:18.183869 systemd[1]: Started sshd@5-10.0.0.83:22-10.0.0.1:41592.service - OpenSSH per-connection server daemon (10.0.0.1:41592). Jul 12 00:17:18.184636 systemd-logind[1423]: Removed session 5. Jul 12 00:17:18.216540 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 41592 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:18.218113 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:18.222985 systemd-logind[1423]: New session 6 of user core. Jul 12 00:17:18.231790 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 12 00:17:18.283992 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 00:17:18.284605 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:18.287575 sudo[1590]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:18.291880 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 12 00:17:18.292127 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:18.310764 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 12 00:17:18.312277 auditctl[1593]: No rules Jul 12 00:17:18.313426 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 00:17:18.315560 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 12 00:17:18.317504 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 12 00:17:18.343953 augenrules[1611]: No rules Jul 12 00:17:18.345569 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 12 00:17:18.346723 sudo[1589]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:18.350562 sshd[1586]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:18.357960 systemd[1]: sshd@5-10.0.0.83:22-10.0.0.1:41592.service: Deactivated successfully. Jul 12 00:17:18.359455 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 00:17:18.361540 systemd-logind[1423]: Session 6 logged out. Waiting for processes to exit. Jul 12 00:17:18.368877 systemd[1]: Started sshd@6-10.0.0.83:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608). Jul 12 00:17:18.369722 systemd-logind[1423]: Removed session 6. 
Jul 12 00:17:18.400502 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:17:18.400985 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:17:18.405175 systemd-logind[1423]: New session 7 of user core. Jul 12 00:17:18.415687 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 12 00:17:18.468757 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:17:18.469271 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 00:17:18.874830 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 00:17:18.874901 (dockerd)[1641]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 00:17:19.159654 dockerd[1641]: time="2025-07-12T00:17:19.159489588Z" level=info msg="Starting up" Jul 12 00:17:19.303298 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport453360534-merged.mount: Deactivated successfully. Jul 12 00:17:19.323520 dockerd[1641]: time="2025-07-12T00:17:19.323441502Z" level=info msg="Loading containers: start." Jul 12 00:17:19.431562 kernel: Initializing XFRM netlink socket Jul 12 00:17:19.511762 systemd-networkd[1385]: docker0: Link UP Jul 12 00:17:19.530973 dockerd[1641]: time="2025-07-12T00:17:19.530912236Z" level=info msg="Loading containers: done." 
Jul 12 00:17:19.546014 dockerd[1641]: time="2025-07-12T00:17:19.545951069Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 00:17:19.546164 dockerd[1641]: time="2025-07-12T00:17:19.546080461Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 12 00:17:19.546262 dockerd[1641]: time="2025-07-12T00:17:19.546226817Z" level=info msg="Daemon has completed initialization" Jul 12 00:17:19.582539 dockerd[1641]: time="2025-07-12T00:17:19.582397964Z" level=info msg="API listen on /run/docker.sock" Jul 12 00:17:19.582811 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 00:17:20.215279 containerd[1438]: time="2025-07-12T00:17:20.215233653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 12 00:17:20.780448 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1256466306.mount: Deactivated successfully. 
Jul 12 00:17:21.659092 containerd[1438]: time="2025-07-12T00:17:21.659034561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:21.659583 containerd[1438]: time="2025-07-12T00:17:21.659545481Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 12 00:17:21.660562 containerd[1438]: time="2025-07-12T00:17:21.660487622Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:21.663373 containerd[1438]: time="2025-07-12T00:17:21.663319645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:21.664601 containerd[1438]: time="2025-07-12T00:17:21.664563016Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.449282792s" Jul 12 00:17:21.664659 containerd[1438]: time="2025-07-12T00:17:21.664605226Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 12 00:17:21.667683 containerd[1438]: time="2025-07-12T00:17:21.667636576Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 12 00:17:22.673719 containerd[1438]: time="2025-07-12T00:17:22.673652203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:22.674485 containerd[1438]: time="2025-07-12T00:17:22.674448543Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 12 00:17:22.675441 containerd[1438]: time="2025-07-12T00:17:22.675386436Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:22.679023 containerd[1438]: time="2025-07-12T00:17:22.678977051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:22.679774 containerd[1438]: time="2025-07-12T00:17:22.679732702Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.011961455s" Jul 12 00:17:22.679774 containerd[1438]: time="2025-07-12T00:17:22.679770551Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 12 00:17:22.680577 containerd[1438]: time="2025-07-12T00:17:22.680425339Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 12 00:17:23.617086 containerd[1438]: time="2025-07-12T00:17:23.617011003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:23.629385 containerd[1438]: 
time="2025-07-12T00:17:23.629337953Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 12 00:17:23.642310 containerd[1438]: time="2025-07-12T00:17:23.642266794Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:23.657487 containerd[1438]: time="2025-07-12T00:17:23.657440689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:23.658668 containerd[1438]: time="2025-07-12T00:17:23.658627390Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 978.161522ms" Jul 12 00:17:23.658718 containerd[1438]: time="2025-07-12T00:17:23.658668079Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 12 00:17:23.659217 containerd[1438]: time="2025-07-12T00:17:23.659180592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 12 00:17:24.567116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375197017.mount: Deactivated successfully. Jul 12 00:17:24.568064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 00:17:24.580748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:24.685771 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 00:17:24.691550 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 00:17:24.739475 kubelet[1866]: E0712 00:17:24.739415 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:17:24.743602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:17:24.743747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:17:25.037326 containerd[1438]: time="2025-07-12T00:17:25.037204182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:25.037919 containerd[1438]: time="2025-07-12T00:17:25.037681281Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 12 00:17:25.038976 containerd[1438]: time="2025-07-12T00:17:25.038905173Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:25.041479 containerd[1438]: time="2025-07-12T00:17:25.041413931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:25.042219 containerd[1438]: time="2025-07-12T00:17:25.042083709Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.38286807s" Jul 12 00:17:25.042219 containerd[1438]: time="2025-07-12T00:17:25.042119196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 12 00:17:25.042713 containerd[1438]: time="2025-07-12T00:17:25.042676831Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 12 00:17:25.569386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59935930.mount: Deactivated successfully. Jul 12 00:17:26.244101 containerd[1438]: time="2025-07-12T00:17:26.243932555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.245215 containerd[1438]: time="2025-07-12T00:17:26.245186445Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 12 00:17:26.245933 containerd[1438]: time="2025-07-12T00:17:26.245900548Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.249267 containerd[1438]: time="2025-07-12T00:17:26.249196807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.250666 containerd[1438]: time="2025-07-12T00:17:26.250627652Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.207915974s" Jul 12 00:17:26.250733 containerd[1438]: time="2025-07-12T00:17:26.250664780Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 12 00:17:26.251332 containerd[1438]: time="2025-07-12T00:17:26.251107348Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 00:17:26.681245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3805183412.mount: Deactivated successfully. Jul 12 00:17:26.689949 containerd[1438]: time="2025-07-12T00:17:26.689905073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.690725 containerd[1438]: time="2025-07-12T00:17:26.690656583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 00:17:26.691479 containerd[1438]: time="2025-07-12T00:17:26.691309954Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.694690 containerd[1438]: time="2025-07-12T00:17:26.694038499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:26.694690 containerd[1438]: time="2025-07-12T00:17:26.694581727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 443.434051ms" Jul 12 
00:17:26.694690 containerd[1438]: time="2025-07-12T00:17:26.694606212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 00:17:26.695313 containerd[1438]: time="2025-07-12T00:17:26.695232337Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 12 00:17:27.226956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560978900.mount: Deactivated successfully. Jul 12 00:17:28.533167 containerd[1438]: time="2025-07-12T00:17:28.533106479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:28.534199 containerd[1438]: time="2025-07-12T00:17:28.534149394Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 12 00:17:28.536625 containerd[1438]: time="2025-07-12T00:17:28.536586291Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:28.540353 containerd[1438]: time="2025-07-12T00:17:28.540289506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:28.541621 containerd[1438]: time="2025-07-12T00:17:28.541575587Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.846168855s" Jul 12 00:17:28.541621 containerd[1438]: time="2025-07-12T00:17:28.541620195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 12 00:17:32.105195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:32.122811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:32.144027 systemd[1]: Reloading requested from client PID 2015 ('systemctl') (unit session-7.scope)... Jul 12 00:17:32.144044 systemd[1]: Reloading... Jul 12 00:17:32.215555 zram_generator::config[2054]: No configuration found. Jul 12 00:17:32.342885 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:32.397827 systemd[1]: Reloading finished in 253 ms. Jul 12 00:17:32.443275 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 00:17:32.443338 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 00:17:32.444586 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:32.454980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:32.563978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:32.568708 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:17:32.605810 kubelet[2101]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:32.605810 kubelet[2101]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 12 00:17:32.605810 kubelet[2101]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:32.606160 kubelet[2101]: I0712 00:17:32.605880 2101 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:17:33.706498 kubelet[2101]: I0712 00:17:33.706437 2101 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:17:33.706498 kubelet[2101]: I0712 00:17:33.706482 2101 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:17:33.706865 kubelet[2101]: I0712 00:17:33.706742 2101 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:17:33.798803 kubelet[2101]: E0712 00:17:33.798740 2101 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:33.800018 kubelet[2101]: I0712 00:17:33.799981 2101 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:33.808404 kubelet[2101]: E0712 00:17:33.808363 2101 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:17:33.808404 kubelet[2101]: I0712 00:17:33.808403 2101 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 12 00:17:33.812310 kubelet[2101]: I0712 00:17:33.812287 2101 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:17:33.813110 kubelet[2101]: I0712 00:17:33.813070 2101 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:17:33.813250 kubelet[2101]: I0712 00:17:33.813203 2101 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:17:33.813421 kubelet[2101]: I0712 00:17:33.813236 2101 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:17:33.813525 kubelet[2101]: I0712 00:17:33.813482 2101 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:17:33.813525 kubelet[2101]: I0712 00:17:33.813494 2101 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:17:33.813866 kubelet[2101]: I0712 00:17:33.813834 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:33.818961 kubelet[2101]: I0712 00:17:33.818685 2101 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:17:33.818961 kubelet[2101]: I0712 00:17:33.818719 2101 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:17:33.818961 kubelet[2101]: I0712 00:17:33.818743 2101 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:17:33.818961 kubelet[2101]: I0712 00:17:33.818818 2101 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:17:33.822357 kubelet[2101]: W0712 00:17:33.822178 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:33.822357 kubelet[2101]: E0712 00:17:33.822255 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:33.823056 kubelet[2101]: W0712 00:17:33.822715 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:33.823056 kubelet[2101]: E0712 00:17:33.822759 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:33.823464 kubelet[2101]: I0712 00:17:33.823439 2101 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:17:33.825193 kubelet[2101]: I0712 00:17:33.825161 2101 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:17:33.825651 kubelet[2101]: W0712 00:17:33.825622 2101 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 00:17:33.827601 kubelet[2101]: I0712 00:17:33.827576 2101 server.go:1274] "Started kubelet" Jul 12 00:17:33.829519 kubelet[2101]: I0712 00:17:33.829170 2101 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:17:33.830067 kubelet[2101]: I0712 00:17:33.830034 2101 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:17:33.830147 kubelet[2101]: I0712 00:17:33.830124 2101 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:17:33.830147 kubelet[2101]: I0712 00:17:33.830165 2101 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:17:33.835094 kubelet[2101]: I0712 00:17:33.831267 2101 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:17:33.835094 kubelet[2101]: I0712 00:17:33.833940 2101 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:17:33.835724 kubelet[2101]: I0712 00:17:33.835564 2101 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:17:33.835724 kubelet[2101]: I0712 00:17:33.835689 2101 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:17:33.835874 kubelet[2101]: I0712 00:17:33.835746 2101 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:17:33.836338 kubelet[2101]: W0712 00:17:33.836225 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:33.836338 kubelet[2101]: E0712 00:17:33.836283 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:33.841747 kubelet[2101]: E0712 00:17:33.837959 2101 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.83:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.83:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185158e1189b32d4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:17:33.827543764 +0000 UTC m=+1.255441106,LastTimestamp:2025-07-12 00:17:33.827543764 +0000 UTC m=+1.255441106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 00:17:33.841747 kubelet[2101]: E0712 00:17:33.841126 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="200ms" Jul 12 00:17:33.842219 kubelet[2101]: E0712 00:17:33.842190 2101 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:17:33.843225 kubelet[2101]: E0712 00:17:33.842848 2101 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:33.843583 kubelet[2101]: I0712 00:17:33.843562 2101 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:17:33.845030 kubelet[2101]: I0712 00:17:33.844964 2101 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:17:33.845030 kubelet[2101]: I0712 00:17:33.844983 2101 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:17:33.854688 kubelet[2101]: I0712 00:17:33.854023 2101 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:17:33.855919 kubelet[2101]: I0712 00:17:33.855079 2101 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:17:33.855919 kubelet[2101]: I0712 00:17:33.855109 2101 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:17:33.855919 kubelet[2101]: I0712 00:17:33.855128 2101 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:17:33.855919 kubelet[2101]: E0712 00:17:33.855184 2101 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:33.858169 kubelet[2101]: W0712 00:17:33.858132 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:33.858863 kubelet[2101]: E0712 00:17:33.858686 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:33.859196 kubelet[2101]: I0712 00:17:33.859179 2101 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:17:33.859307 kubelet[2101]: I0712 00:17:33.859295 2101 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:33.859399 kubelet[2101]: I0712 00:17:33.859389 2101 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:33.861823 kubelet[2101]: I0712 00:17:33.861801 2101 policy_none.go:49] "None policy: Start" Jul 12 00:17:33.862568 kubelet[2101]: I0712 00:17:33.862506 2101 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:17:33.862568 kubelet[2101]: I0712 00:17:33.862556 2101 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:33.868380 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Jul 12 00:17:33.880669 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 00:17:33.883705 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 00:17:33.895907 kubelet[2101]: I0712 00:17:33.895289 2101 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:17:33.895907 kubelet[2101]: I0712 00:17:33.895507 2101 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:33.895907 kubelet[2101]: I0712 00:17:33.895545 2101 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:33.895907 kubelet[2101]: I0712 00:17:33.895852 2101 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:33.897186 kubelet[2101]: E0712 00:17:33.897093 2101 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 00:17:33.964880 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 12 00:17:33.986190 systemd[1]: Created slice kubepods-burstable-pod07a318c045de1343bd64cb11213dbf46.slice - libcontainer container kubepods-burstable-pod07a318c045de1343bd64cb11213dbf46.slice. Jul 12 00:17:33.990870 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
Jul 12 00:17:33.997702 kubelet[2101]: I0712 00:17:33.997631 2101 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:33.998152 kubelet[2101]: E0712 00:17:33.998114 2101 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jul 12 00:17:34.037472 kubelet[2101]: I0712 00:17:34.037423 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:34.037472 kubelet[2101]: I0712 00:17:34.037469 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:34.037646 kubelet[2101]: I0712 00:17:34.037489 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:34.037646 kubelet[2101]: I0712 00:17:34.037525 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:34.037646 kubelet[2101]: I0712 00:17:34.037546 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:34.037646 kubelet[2101]: I0712 00:17:34.037561 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:34.037646 kubelet[2101]: I0712 00:17:34.037576 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:34.037767 kubelet[2101]: I0712 00:17:34.037590 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:34.037767 kubelet[2101]: I0712 00:17:34.037603 2101 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:34.044968 kubelet[2101]: E0712 00:17:34.044917 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="400ms" Jul 12 00:17:34.199304 kubelet[2101]: I0712 00:17:34.199274 2101 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:34.199628 kubelet[2101]: E0712 00:17:34.199600 2101 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jul 12 00:17:34.283383 kubelet[2101]: E0712 00:17:34.283244 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:34.284332 containerd[1438]: time="2025-07-12T00:17:34.284148049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:34.289783 kubelet[2101]: E0712 00:17:34.289754 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:34.290389 containerd[1438]: time="2025-07-12T00:17:34.290248875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07a318c045de1343bd64cb11213dbf46,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:34.294227 kubelet[2101]: E0712 00:17:34.293728 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:34.294354 containerd[1438]: 
time="2025-07-12T00:17:34.294016019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:34.446314 kubelet[2101]: E0712 00:17:34.446227 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="800ms" Jul 12 00:17:34.601044 kubelet[2101]: I0712 00:17:34.600915 2101 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:34.601369 kubelet[2101]: E0712 00:17:34.601277 2101 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.83:6443/api/v1/nodes\": dial tcp 10.0.0.83:6443: connect: connection refused" node="localhost" Jul 12 00:17:34.674464 kubelet[2101]: W0712 00:17:34.674396 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:34.674464 kubelet[2101]: E0712 00:17:34.674466 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:34.752874 kubelet[2101]: W0712 00:17:34.752797 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:34.752874 kubelet[2101]: E0712 
00:17:34.752870 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:34.824247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount585516862.mount: Deactivated successfully. Jul 12 00:17:34.828920 containerd[1438]: time="2025-07-12T00:17:34.828877170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:34.830278 containerd[1438]: time="2025-07-12T00:17:34.830241741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:17:34.832415 containerd[1438]: time="2025-07-12T00:17:34.832351348Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:34.833756 containerd[1438]: time="2025-07-12T00:17:34.833724361Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:34.834820 containerd[1438]: time="2025-07-12T00:17:34.834697712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:34.834820 containerd[1438]: time="2025-07-12T00:17:34.834715635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:34.835568 containerd[1438]: 
time="2025-07-12T00:17:34.835538602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:17:34.837729 containerd[1438]: time="2025-07-12T00:17:34.837677494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:17:34.838739 containerd[1438]: time="2025-07-12T00:17:34.838707614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.480752ms" Jul 12 00:17:34.839984 containerd[1438]: time="2025-07-12T00:17:34.839955847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.893261ms" Jul 12 00:17:34.843248 containerd[1438]: time="2025-07-12T00:17:34.843183988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.867782ms" Jul 12 00:17:34.980947 containerd[1438]: time="2025-07-12T00:17:34.980716547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:34.981729 containerd[1438]: time="2025-07-12T00:17:34.980875212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:34.981729 containerd[1438]: time="2025-07-12T00:17:34.981280475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:34.981729 containerd[1438]: time="2025-07-12T00:17:34.981468504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:34.982990 containerd[1438]: time="2025-07-12T00:17:34.982360482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:34.982990 containerd[1438]: time="2025-07-12T00:17:34.982765385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:34.982990 containerd[1438]: time="2025-07-12T00:17:34.982778067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:34.982990 containerd[1438]: time="2025-07-12T00:17:34.982852718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:34.986161 containerd[1438]: time="2025-07-12T00:17:34.986025370Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:34.986161 containerd[1438]: time="2025-07-12T00:17:34.986075418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:34.986161 containerd[1438]: time="2025-07-12T00:17:34.986092020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:34.986699 containerd[1438]: time="2025-07-12T00:17:34.986547531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:35.005703 systemd[1]: Started cri-containerd-26578a93048455ef71df3b0264165bec63e6dee6dc0c2e5ea57aeed07ac44d85.scope - libcontainer container 26578a93048455ef71df3b0264165bec63e6dee6dc0c2e5ea57aeed07ac44d85. Jul 12 00:17:35.006853 systemd[1]: Started cri-containerd-79b967df640a0c49361981835feb159dc34e1f9de43d0b00863787dbe20bd25b.scope - libcontainer container 79b967df640a0c49361981835feb159dc34e1f9de43d0b00863787dbe20bd25b. Jul 12 00:17:35.010346 systemd[1]: Started cri-containerd-13cafcd2a644a938ba524359fbdd28eb68711a73b535037191e9ac9103c1c3dd.scope - libcontainer container 13cafcd2a644a938ba524359fbdd28eb68711a73b535037191e9ac9103c1c3dd. 
Jul 12 00:17:35.037672 containerd[1438]: time="2025-07-12T00:17:35.037619668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:07a318c045de1343bd64cb11213dbf46,Namespace:kube-system,Attempt:0,} returns sandbox id \"26578a93048455ef71df3b0264165bec63e6dee6dc0c2e5ea57aeed07ac44d85\"" Jul 12 00:17:35.038759 kubelet[2101]: E0712 00:17:35.038731 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.041236 containerd[1438]: time="2025-07-12T00:17:35.041181203Z" level=info msg="CreateContainer within sandbox \"26578a93048455ef71df3b0264165bec63e6dee6dc0c2e5ea57aeed07ac44d85\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:17:35.044945 containerd[1438]: time="2025-07-12T00:17:35.044906202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b967df640a0c49361981835feb159dc34e1f9de43d0b00863787dbe20bd25b\"" Jul 12 00:17:35.046526 kubelet[2101]: E0712 00:17:35.046121 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.046596 containerd[1438]: time="2025-07-12T00:17:35.046461716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"13cafcd2a644a938ba524359fbdd28eb68711a73b535037191e9ac9103c1c3dd\"" Jul 12 00:17:35.047508 kubelet[2101]: E0712 00:17:35.047466 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.048186 containerd[1438]: 
time="2025-07-12T00:17:35.048145529Z" level=info msg="CreateContainer within sandbox \"79b967df640a0c49361981835feb159dc34e1f9de43d0b00863787dbe20bd25b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:17:35.050461 containerd[1438]: time="2025-07-12T00:17:35.050428592Z" level=info msg="CreateContainer within sandbox \"13cafcd2a644a938ba524359fbdd28eb68711a73b535037191e9ac9103c1c3dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:17:35.069482 containerd[1438]: time="2025-07-12T00:17:35.069434166Z" level=info msg="CreateContainer within sandbox \"26578a93048455ef71df3b0264165bec63e6dee6dc0c2e5ea57aeed07ac44d85\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d558bff93a646c2e2a777730e8dc58ee9973c5dd1fffa55e689763a9182ffbe7\"" Jul 12 00:17:35.070527 containerd[1438]: time="2025-07-12T00:17:35.070190839Z" level=info msg="StartContainer for \"d558bff93a646c2e2a777730e8dc58ee9973c5dd1fffa55e689763a9182ffbe7\"" Jul 12 00:17:35.076873 containerd[1438]: time="2025-07-12T00:17:35.076834197Z" level=info msg="CreateContainer within sandbox \"79b967df640a0c49361981835feb159dc34e1f9de43d0b00863787dbe20bd25b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c62e16d185ab2d69ece132aa4d0dab9e2438a63f9872e5498f246d6544384ada\"" Jul 12 00:17:35.077925 containerd[1438]: time="2025-07-12T00:17:35.077869193Z" level=info msg="StartContainer for \"c62e16d185ab2d69ece132aa4d0dab9e2438a63f9872e5498f246d6544384ada\"" Jul 12 00:17:35.080427 containerd[1438]: time="2025-07-12T00:17:35.080375049Z" level=info msg="CreateContainer within sandbox \"13cafcd2a644a938ba524359fbdd28eb68711a73b535037191e9ac9103c1c3dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0eec85920b3eb9e8dbff9b83e7e5596ff5b1375ff3217ac56028675e9c10ce35\"" Jul 12 00:17:35.080892 containerd[1438]: time="2025-07-12T00:17:35.080821116Z" level=info msg="StartContainer 
for \"0eec85920b3eb9e8dbff9b83e7e5596ff5b1375ff3217ac56028675e9c10ce35\"" Jul 12 00:17:35.097733 systemd[1]: Started cri-containerd-d558bff93a646c2e2a777730e8dc58ee9973c5dd1fffa55e689763a9182ffbe7.scope - libcontainer container d558bff93a646c2e2a777730e8dc58ee9973c5dd1fffa55e689763a9182ffbe7. Jul 12 00:17:35.100988 systemd[1]: Started cri-containerd-c62e16d185ab2d69ece132aa4d0dab9e2438a63f9872e5498f246d6544384ada.scope - libcontainer container c62e16d185ab2d69ece132aa4d0dab9e2438a63f9872e5498f246d6544384ada. Jul 12 00:17:35.108272 systemd[1]: Started cri-containerd-0eec85920b3eb9e8dbff9b83e7e5596ff5b1375ff3217ac56028675e9c10ce35.scope - libcontainer container 0eec85920b3eb9e8dbff9b83e7e5596ff5b1375ff3217ac56028675e9c10ce35. Jul 12 00:17:35.126090 kubelet[2101]: W0712 00:17:35.125985 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:35.126090 kubelet[2101]: E0712 00:17:35.126034 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:35.146254 containerd[1438]: time="2025-07-12T00:17:35.144811045Z" level=info msg="StartContainer for \"d558bff93a646c2e2a777730e8dc58ee9973c5dd1fffa55e689763a9182ffbe7\" returns successfully" Jul 12 00:17:35.146254 containerd[1438]: time="2025-07-12T00:17:35.144905820Z" level=info msg="StartContainer for \"c62e16d185ab2d69ece132aa4d0dab9e2438a63f9872e5498f246d6544384ada\" returns successfully" Jul 12 00:17:35.155141 containerd[1438]: time="2025-07-12T00:17:35.155040421Z" level=info msg="StartContainer for 
\"0eec85920b3eb9e8dbff9b83e7e5596ff5b1375ff3217ac56028675e9c10ce35\" returns successfully" Jul 12 00:17:35.247495 kubelet[2101]: E0712 00:17:35.247337 2101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.83:6443: connect: connection refused" interval="1.6s" Jul 12 00:17:35.315248 kubelet[2101]: W0712 00:17:35.315145 2101 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.83:6443: connect: connection refused Jul 12 00:17:35.315426 kubelet[2101]: E0712 00:17:35.315334 2101 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.83:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:17:35.404239 kubelet[2101]: I0712 00:17:35.404208 2101 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:35.873797 kubelet[2101]: E0712 00:17:35.873379 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.875103 kubelet[2101]: E0712 00:17:35.874665 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:35.879635 kubelet[2101]: E0712 00:17:35.877393 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:36.686360 kubelet[2101]: I0712 
00:17:36.686320 2101 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:17:36.686360 kubelet[2101]: E0712 00:17:36.686360 2101 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:17:36.700538 kubelet[2101]: E0712 00:17:36.700242 2101 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:36.800943 kubelet[2101]: E0712 00:17:36.800880 2101 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:17:36.883217 kubelet[2101]: E0712 00:17:36.883165 2101 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:36.883647 kubelet[2101]: E0712 00:17:36.883348 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:37.822755 kubelet[2101]: I0712 00:17:37.822687 2101 apiserver.go:52] "Watching apiserver" Jul 12 00:17:37.836035 kubelet[2101]: I0712 00:17:37.835992 2101 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:17:38.961402 kubelet[2101]: E0712 00:17:38.961357 2101 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:39.027583 systemd[1]: Reloading requested from client PID 2379 ('systemctl') (unit session-7.scope)... Jul 12 00:17:39.027600 systemd[1]: Reloading... Jul 12 00:17:39.092663 zram_generator::config[2421]: No configuration found. 
Jul 12 00:17:39.186449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:17:39.252445 systemd[1]: Reloading finished in 224 ms. Jul 12 00:17:39.286351 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:39.294020 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:17:39.295598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:39.295660 systemd[1]: kubelet.service: Consumed 1.644s CPU time, 135.2M memory peak, 0B memory swap peak. Jul 12 00:17:39.310940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:17:39.415217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:17:39.421009 (kubelet)[2460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:17:39.462655 kubelet[2460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:17:39.462655 kubelet[2460]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 12 00:17:39.462655 kubelet[2460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:17:39.463033 kubelet[2460]: I0712 00:17:39.462704 2460 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:17:39.469992 kubelet[2460]: I0712 00:17:39.469809 2460 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 12 00:17:39.469992 kubelet[2460]: I0712 00:17:39.469849 2460 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:17:39.472123 kubelet[2460]: I0712 00:17:39.472080 2460 server.go:934] "Client rotation is on, will bootstrap in background" Jul 12 00:17:39.474354 kubelet[2460]: I0712 00:17:39.474322 2460 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:17:39.477049 kubelet[2460]: I0712 00:17:39.477007 2460 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:17:39.480047 kubelet[2460]: E0712 00:17:39.480007 2460 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:17:39.480047 kubelet[2460]: I0712 00:17:39.480047 2460 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:17:39.482624 kubelet[2460]: I0712 00:17:39.482592 2460 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:17:39.482741 kubelet[2460]: I0712 00:17:39.482720 2460 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 12 00:17:39.482877 kubelet[2460]: I0712 00:17:39.482843 2460 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:17:39.483060 kubelet[2460]: I0712 00:17:39.482873 2460 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 12 00:17:39.483136 kubelet[2460]: I0712 00:17:39.483064 2460 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:17:39.483136 kubelet[2460]: I0712 00:17:39.483074 2460 container_manager_linux.go:300] "Creating device plugin manager" Jul 12 00:17:39.483136 kubelet[2460]: I0712 00:17:39.483109 2460 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:39.483227 kubelet[2460]: I0712 00:17:39.483214 2460 kubelet.go:408] "Attempting to sync node with API server" Jul 12 00:17:39.483255 kubelet[2460]: I0712 00:17:39.483231 2460 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:17:39.483255 kubelet[2460]: I0712 00:17:39.483250 2460 kubelet.go:314] "Adding apiserver pod source" Jul 12 00:17:39.483306 kubelet[2460]: I0712 00:17:39.483276 2460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:17:39.484183 kubelet[2460]: I0712 00:17:39.484161 2460 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:17:39.484960 kubelet[2460]: I0712 00:17:39.484929 2460 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:17:39.485467 kubelet[2460]: I0712 00:17:39.485393 2460 server.go:1274] "Started kubelet" Jul 12 00:17:39.489595 kubelet[2460]: I0712 00:17:39.488345 2460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:17:39.489595 kubelet[2460]: I0712 00:17:39.489035 2460 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:17:39.489595 kubelet[2460]: I0712 00:17:39.489546 2460 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:17:39.495524 kubelet[2460]: I0712 00:17:39.495494 2460 server.go:449] "Adding debug handlers to kubelet server" Jul 12 00:17:39.497152 
kubelet[2460]: I0712 00:17:39.497124 2460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:17:39.503550 kubelet[2460]: E0712 00:17:39.503443 2460 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:17:39.504877 kubelet[2460]: I0712 00:17:39.504853 2460 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 12 00:17:39.505393 kubelet[2460]: I0712 00:17:39.505330 2460 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:17:39.506083 kubelet[2460]: I0712 00:17:39.506063 2460 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:17:39.506179 kubelet[2460]: I0712 00:17:39.506168 2460 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 12 00:17:39.508654 kubelet[2460]: I0712 00:17:39.508633 2460 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:17:39.508758 kubelet[2460]: I0712 00:17:39.508747 2460 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:17:39.508903 kubelet[2460]: I0712 00:17:39.508883 2460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:17:39.514323 kubelet[2460]: I0712 00:17:39.514275 2460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:17:39.515248 kubelet[2460]: I0712 00:17:39.515207 2460 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 12 00:17:39.515325 kubelet[2460]: I0712 00:17:39.515254 2460 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 12 00:17:39.515325 kubelet[2460]: I0712 00:17:39.515280 2460 kubelet.go:2321] "Starting kubelet main sync loop" Jul 12 00:17:39.515383 kubelet[2460]: E0712 00:17:39.515327 2460 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:17:39.543441 kubelet[2460]: I0712 00:17:39.543411 2460 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 12 00:17:39.543441 kubelet[2460]: I0712 00:17:39.543434 2460 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 12 00:17:39.543441 kubelet[2460]: I0712 00:17:39.543455 2460 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:17:39.543649 kubelet[2460]: I0712 00:17:39.543631 2460 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:17:39.543676 kubelet[2460]: I0712 00:17:39.543650 2460 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:17:39.543676 kubelet[2460]: I0712 00:17:39.543670 2460 policy_none.go:49] "None policy: Start" Jul 12 00:17:39.544342 kubelet[2460]: I0712 00:17:39.544329 2460 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 12 00:17:39.544400 kubelet[2460]: I0712 00:17:39.544350 2460 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:17:39.544486 kubelet[2460]: I0712 00:17:39.544472 2460 state_mem.go:75] "Updated machine memory state" Jul 12 00:17:39.548598 kubelet[2460]: I0712 00:17:39.548452 2460 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:17:39.548866 kubelet[2460]: I0712 00:17:39.548837 2460 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:17:39.549194 kubelet[2460]: I0712 00:17:39.548856 2460 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:17:39.550006 kubelet[2460]: I0712 00:17:39.549930 2460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:17:39.624369 kubelet[2460]: E0712 00:17:39.624331 2460 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:39.656416 kubelet[2460]: I0712 00:17:39.656266 2460 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 12 00:17:39.664638 kubelet[2460]: I0712 00:17:39.664609 2460 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 12 00:17:39.664731 kubelet[2460]: I0712 00:17:39.664696 2460 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 12 00:17:39.808440 kubelet[2460]: I0712 00:17:39.808405 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:39.808590 kubelet[2460]: I0712 00:17:39.808449 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:39.808590 kubelet[2460]: I0712 00:17:39.808486 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:39.808590 kubelet[2460]: I0712 00:17:39.808525 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:39.808590 kubelet[2460]: I0712 00:17:39.808554 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:17:39.808590 kubelet[2460]: I0712 00:17:39.808579 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:39.808712 kubelet[2460]: I0712 00:17:39.808597 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07a318c045de1343bd64cb11213dbf46-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"07a318c045de1343bd64cb11213dbf46\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:39.808712 kubelet[2460]: I0712 00:17:39.808613 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:39.808712 kubelet[2460]: I0712 00:17:39.808628 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:17:39.923318 kubelet[2460]: E0712 00:17:39.923275 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:39.923470 kubelet[2460]: E0712 00:17:39.923452 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:39.925421 kubelet[2460]: E0712 00:17:39.925383 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.028014 sudo[2500]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:17:40.028297 sudo[2500]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 00:17:40.452036 sudo[2500]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:40.484687 kubelet[2460]: I0712 00:17:40.484646 2460 apiserver.go:52] "Watching apiserver" Jul 12 00:17:40.506862 kubelet[2460]: I0712 00:17:40.506821 2460 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 12 00:17:40.529968 kubelet[2460]: E0712 00:17:40.529014 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.529968 kubelet[2460]: E0712 00:17:40.529237 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.542431 kubelet[2460]: E0712 00:17:40.542206 2460 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 00:17:40.542954 kubelet[2460]: E0712 00:17:40.542917 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:40.575389 kubelet[2460]: I0712 00:17:40.575306 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.5752857909999998 podStartE2EDuration="2.575285791s" podCreationTimestamp="2025-07-12 00:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:40.566177144 +0000 UTC m=+1.141846591" watchObservedRunningTime="2025-07-12 00:17:40.575285791 +0000 UTC m=+1.150955238" Jul 12 00:17:40.583645 kubelet[2460]: I0712 00:17:40.583580 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.583559331 podStartE2EDuration="1.583559331s" podCreationTimestamp="2025-07-12 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:40.575263868 +0000 UTC m=+1.150933315" watchObservedRunningTime="2025-07-12 00:17:40.583559331 +0000 UTC m=+1.159228778" Jul 12 00:17:40.596961 kubelet[2460]: I0712 00:17:40.596695 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5966748910000002 podStartE2EDuration="1.596674891s" podCreationTimestamp="2025-07-12 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:40.583687307 +0000 UTC m=+1.159356834" watchObservedRunningTime="2025-07-12 00:17:40.596674891 +0000 UTC m=+1.172344338" Jul 12 00:17:41.530433 kubelet[2460]: E0712 00:17:41.530171 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:41.530433 kubelet[2460]: E0712 00:17:41.530258 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:41.530433 kubelet[2460]: E0712 00:17:41.530412 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:42.192783 sudo[1622]: pam_unix(sudo:session): session closed for user root Jul 12 00:17:42.194311 sshd[1619]: pam_unix(sshd:session): session closed for user core Jul 12 00:17:42.197947 systemd[1]: sshd@6-10.0.0.83:22-10.0.0.1:41608.service: Deactivated successfully. Jul 12 00:17:42.199727 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:17:42.199884 systemd[1]: session-7.scope: Consumed 6.083s CPU time, 153.5M memory peak, 0B memory swap peak. Jul 12 00:17:42.200468 systemd-logind[1423]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:17:42.201271 systemd-logind[1423]: Removed session 7. 
Jul 12 00:17:42.531470 kubelet[2460]: E0712 00:17:42.531353 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:43.533878 kubelet[2460]: E0712 00:17:43.533789 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:45.104368 kubelet[2460]: I0712 00:17:45.104326 2460 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:17:45.105164 containerd[1438]: time="2025-07-12T00:17:45.105001911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:17:45.105539 kubelet[2460]: I0712 00:17:45.105177 2460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:17:46.013570 systemd[1]: Created slice kubepods-besteffort-pod4a0cd756_e149_4cc6_b050_67fa165b08c9.slice - libcontainer container kubepods-besteffort-pod4a0cd756_e149_4cc6_b050_67fa165b08c9.slice. Jul 12 00:17:46.041799 systemd[1]: Created slice kubepods-burstable-podedb7190b_198e_4584_9006_49ea632f777a.slice - libcontainer container kubepods-burstable-podedb7190b_198e_4584_9006_49ea632f777a.slice. 
Jul 12 00:17:46.048442 kubelet[2460]: I0712 00:17:46.048408 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgplc\" (UniqueName: \"kubernetes.io/projected/4a0cd756-e149-4cc6-b050-67fa165b08c9-kube-api-access-vgplc\") pod \"kube-proxy-hk9s9\" (UID: \"4a0cd756-e149-4cc6-b050-67fa165b08c9\") " pod="kube-system/kube-proxy-hk9s9" Jul 12 00:17:46.048659 kubelet[2460]: I0712 00:17:46.048642 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cni-path\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.048834 kubelet[2460]: I0712 00:17:46.048780 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-hubble-tls\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.048834 kubelet[2460]: I0712 00:17:46.048807 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a0cd756-e149-4cc6-b050-67fa165b08c9-kube-proxy\") pod \"kube-proxy-hk9s9\" (UID: \"4a0cd756-e149-4cc6-b050-67fa165b08c9\") " pod="kube-system/kube-proxy-hk9s9" Jul 12 00:17:46.049113 kubelet[2460]: I0712 00:17:46.048825 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-xtables-lock\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049113 kubelet[2460]: I0712 00:17:46.049005 2460 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7190b-198e-4584-9006-49ea632f777a-cilium-config-path\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049113 kubelet[2460]: I0712 00:17:46.049021 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-kernel\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049113 kubelet[2460]: I0712 00:17:46.049074 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ljwb\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-kube-api-access-8ljwb\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049113 kubelet[2460]: I0712 00:17:46.049093 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-bpf-maps\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049389 kubelet[2460]: I0712 00:17:46.049278 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-hostproc\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049389 kubelet[2460]: I0712 00:17:46.049301 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-etc-cni-netd\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049389 kubelet[2460]: I0712 00:17:46.049337 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a0cd756-e149-4cc6-b050-67fa165b08c9-lib-modules\") pod \"kube-proxy-hk9s9\" (UID: \"4a0cd756-e149-4cc6-b050-67fa165b08c9\") " pod="kube-system/kube-proxy-hk9s9" Jul 12 00:17:46.049389 kubelet[2460]: I0712 00:17:46.049355 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a0cd756-e149-4cc6-b050-67fa165b08c9-xtables-lock\") pod \"kube-proxy-hk9s9\" (UID: \"4a0cd756-e149-4cc6-b050-67fa165b08c9\") " pod="kube-system/kube-proxy-hk9s9" Jul 12 00:17:46.049389 kubelet[2460]: I0712 00:17:46.049369 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-run\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049839 kubelet[2460]: I0712 00:17:46.049598 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-lib-modules\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049839 kubelet[2460]: I0712 00:17:46.049623 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7190b-198e-4584-9006-49ea632f777a-clustermesh-secrets\") pod \"cilium-skwbf\" (UID: 
\"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049839 kubelet[2460]: I0712 00:17:46.049785 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-cgroup\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.049839 kubelet[2460]: I0712 00:17:46.049810 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-net\") pod \"cilium-skwbf\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") " pod="kube-system/cilium-skwbf" Jul 12 00:17:46.252075 kubelet[2460]: I0712 00:17:46.252013 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/699bece4-ebf8-4df7-8103-b3358eb38e0a-cilium-config-path\") pod \"cilium-operator-5d85765b45-6whj8\" (UID: \"699bece4-ebf8-4df7-8103-b3358eb38e0a\") " pod="kube-system/cilium-operator-5d85765b45-6whj8" Jul 12 00:17:46.252075 kubelet[2460]: I0712 00:17:46.252067 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7n49\" (UniqueName: \"kubernetes.io/projected/699bece4-ebf8-4df7-8103-b3358eb38e0a-kube-api-access-t7n49\") pod \"cilium-operator-5d85765b45-6whj8\" (UID: \"699bece4-ebf8-4df7-8103-b3358eb38e0a\") " pod="kube-system/cilium-operator-5d85765b45-6whj8" Jul 12 00:17:46.255788 systemd[1]: Created slice kubepods-besteffort-pod699bece4_ebf8_4df7_8103_b3358eb38e0a.slice - libcontainer container kubepods-besteffort-pod699bece4_ebf8_4df7_8103_b3358eb38e0a.slice. 
Jul 12 00:17:46.338604 kubelet[2460]: E0712 00:17:46.337981 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.339007 containerd[1438]: time="2025-07-12T00:17:46.338962861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk9s9,Uid:4a0cd756-e149-4cc6-b050-67fa165b08c9,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:46.346568 kubelet[2460]: E0712 00:17:46.346362 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.347162 containerd[1438]: time="2025-07-12T00:17:46.346905463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skwbf,Uid:edb7190b-198e-4584-9006-49ea632f777a,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:46.378648 containerd[1438]: time="2025-07-12T00:17:46.378176814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:46.378648 containerd[1438]: time="2025-07-12T00:17:46.378573536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:46.378648 containerd[1438]: time="2025-07-12T00:17:46.378595339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.378849 containerd[1438]: time="2025-07-12T00:17:46.378675267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.398706 systemd[1]: Started cri-containerd-8ecacd588e8cc6bcfed6050fb908f34ec6fbf15450f2ea79e73e1a40f3b2d09b.scope - libcontainer container 8ecacd588e8cc6bcfed6050fb908f34ec6fbf15450f2ea79e73e1a40f3b2d09b. 
Jul 12 00:17:46.407870 containerd[1438]: time="2025-07-12T00:17:46.407725744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:46.407870 containerd[1438]: time="2025-07-12T00:17:46.407826915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:46.407870 containerd[1438]: time="2025-07-12T00:17:46.407842956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.408054 containerd[1438]: time="2025-07-12T00:17:46.407935566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.433796 systemd[1]: Started cri-containerd-a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b.scope - libcontainer container a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b. 
Jul 12 00:17:46.434857 containerd[1438]: time="2025-07-12T00:17:46.434804372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hk9s9,Uid:4a0cd756-e149-4cc6-b050-67fa165b08c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ecacd588e8cc6bcfed6050fb908f34ec6fbf15450f2ea79e73e1a40f3b2d09b\"" Jul 12 00:17:46.436052 kubelet[2460]: E0712 00:17:46.436016 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.439424 containerd[1438]: time="2025-07-12T00:17:46.439384897Z" level=info msg="CreateContainer within sandbox \"8ecacd588e8cc6bcfed6050fb908f34ec6fbf15450f2ea79e73e1a40f3b2d09b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:17:46.458588 containerd[1438]: time="2025-07-12T00:17:46.458544206Z" level=info msg="CreateContainer within sandbox \"8ecacd588e8cc6bcfed6050fb908f34ec6fbf15450f2ea79e73e1a40f3b2d09b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42df52953ee34f6786e9ff64c70d19abcc17ad4b317997bde899f093f84c5fb4\"" Jul 12 00:17:46.462074 containerd[1438]: time="2025-07-12T00:17:46.461262934Z" level=info msg="StartContainer for \"42df52953ee34f6786e9ff64c70d19abcc17ad4b317997bde899f093f84c5fb4\"" Jul 12 00:17:46.467156 containerd[1438]: time="2025-07-12T00:17:46.467020504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-skwbf,Uid:edb7190b-198e-4584-9006-49ea632f777a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\"" Jul 12 00:17:46.467920 kubelet[2460]: E0712 00:17:46.467895 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.469148 containerd[1438]: time="2025-07-12T00:17:46.469092963Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:17:46.496739 systemd[1]: Started cri-containerd-42df52953ee34f6786e9ff64c70d19abcc17ad4b317997bde899f093f84c5fb4.scope - libcontainer container 42df52953ee34f6786e9ff64c70d19abcc17ad4b317997bde899f093f84c5fb4. Jul 12 00:17:46.524427 containerd[1438]: time="2025-07-12T00:17:46.524247805Z" level=info msg="StartContainer for \"42df52953ee34f6786e9ff64c70d19abcc17ad4b317997bde899f093f84c5fb4\" returns successfully" Jul 12 00:17:46.547299 kubelet[2460]: E0712 00:17:46.546824 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.559729 kubelet[2460]: I0712 00:17:46.559669 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hk9s9" podStartSLOduration=1.559648194 podStartE2EDuration="1.559648194s" podCreationTimestamp="2025-07-12 00:17:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:17:46.557128927 +0000 UTC m=+7.132798374" watchObservedRunningTime="2025-07-12 00:17:46.559648194 +0000 UTC m=+7.135317641" Jul 12 00:17:46.562223 kubelet[2460]: E0712 00:17:46.562188 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:46.562861 containerd[1438]: time="2025-07-12T00:17:46.562777405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6whj8,Uid:699bece4-ebf8-4df7-8103-b3358eb38e0a,Namespace:kube-system,Attempt:0,}" Jul 12 00:17:46.597459 containerd[1438]: time="2025-07-12T00:17:46.597018432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:17:46.597459 containerd[1438]: time="2025-07-12T00:17:46.597088479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:17:46.597459 containerd[1438]: time="2025-07-12T00:17:46.597104521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.597459 containerd[1438]: time="2025-07-12T00:17:46.597192490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:17:46.620717 systemd[1]: Started cri-containerd-8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2.scope - libcontainer container 8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2. Jul 12 00:17:46.664985 containerd[1438]: time="2025-07-12T00:17:46.664931865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6whj8,Uid:699bece4-ebf8-4df7-8103-b3358eb38e0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\"" Jul 12 00:17:46.665752 kubelet[2460]: E0712 00:17:46.665731 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:51.282231 kubelet[2460]: E0712 00:17:51.280881 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:51.537059 kubelet[2460]: E0712 00:17:51.536946 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:52.628084 kubelet[2460]: E0712 
00:17:52.627563 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:53.587935 kubelet[2460]: E0712 00:17:53.587803 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:54.663089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545907539.mount: Deactivated successfully. Jul 12 00:17:56.034412 containerd[1438]: time="2025-07-12T00:17:56.034347442Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:56.034966 containerd[1438]: time="2025-07-12T00:17:56.034900005Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 00:17:56.036286 containerd[1438]: time="2025-07-12T00:17:56.036244628Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:56.038205 containerd[1438]: time="2025-07-12T00:17:56.038155856Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.569018448s" Jul 12 00:17:56.038205 containerd[1438]: time="2025-07-12T00:17:56.038201099Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:17:56.041131 containerd[1438]: time="2025-07-12T00:17:56.041088442Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:17:56.049393 containerd[1438]: time="2025-07-12T00:17:56.049266792Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:17:56.083223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629097497.mount: Deactivated successfully. Jul 12 00:17:56.087623 containerd[1438]: time="2025-07-12T00:17:56.087580426Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\"" Jul 12 00:17:56.087968 containerd[1438]: time="2025-07-12T00:17:56.087941454Z" level=info msg="StartContainer for \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\"" Jul 12 00:17:56.125149 systemd[1]: Started cri-containerd-fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48.scope - libcontainer container fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48. Jul 12 00:17:56.191825 containerd[1438]: time="2025-07-12T00:17:56.191783301Z" level=info msg="StartContainer for \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\" returns successfully" Jul 12 00:17:56.222722 systemd[1]: cri-containerd-fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48.scope: Deactivated successfully. 
Jul 12 00:17:56.269761 containerd[1438]: time="2025-07-12T00:17:56.264751687Z" level=info msg="shim disconnected" id=fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48 namespace=k8s.io Jul 12 00:17:56.269761 containerd[1438]: time="2025-07-12T00:17:56.269635823Z" level=warning msg="cleaning up after shim disconnected" id=fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48 namespace=k8s.io Jul 12 00:17:56.269761 containerd[1438]: time="2025-07-12T00:17:56.269649384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:56.570705 kubelet[2460]: E0712 00:17:56.570664 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:56.572710 containerd[1438]: time="2025-07-12T00:17:56.572664987Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:17:56.590830 containerd[1438]: time="2025-07-12T00:17:56.590781584Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\"" Jul 12 00:17:56.591431 containerd[1438]: time="2025-07-12T00:17:56.591389551Z" level=info msg="StartContainer for \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\"" Jul 12 00:17:56.620701 systemd[1]: Started cri-containerd-173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1.scope - libcontainer container 173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1. 
Jul 12 00:17:56.644664 containerd[1438]: time="2025-07-12T00:17:56.644612135Z" level=info msg="StartContainer for \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\" returns successfully" Jul 12 00:17:56.665440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:17:56.665668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:56.665875 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:56.673043 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:17:56.673243 systemd[1]: cri-containerd-173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1.scope: Deactivated successfully. Jul 12 00:17:56.697532 containerd[1438]: time="2025-07-12T00:17:56.697460609Z" level=info msg="shim disconnected" id=173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1 namespace=k8s.io Jul 12 00:17:56.697532 containerd[1438]: time="2025-07-12T00:17:56.697539255Z" level=warning msg="cleaning up after shim disconnected" id=173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1 namespace=k8s.io Jul 12 00:17:56.698033 containerd[1438]: time="2025-07-12T00:17:56.697548696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:56.719626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:17:57.080763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48-rootfs.mount: Deactivated successfully. Jul 12 00:17:57.096471 update_engine[1427]: I20250712 00:17:57.096413 1427 update_attempter.cc:509] Updating boot flags... Jul 12 00:17:57.112961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878602031.mount: Deactivated successfully. 
Jul 12 00:17:57.120567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2996) Jul 12 00:17:57.161478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2998) Jul 12 00:17:57.192536 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2998) Jul 12 00:17:57.492502 containerd[1438]: time="2025-07-12T00:17:57.492270468Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:57.493033 containerd[1438]: time="2025-07-12T00:17:57.492981881Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 00:17:57.493863 containerd[1438]: time="2025-07-12T00:17:57.493836465Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:17:57.495278 containerd[1438]: time="2025-07-12T00:17:57.495224489Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.454091644s" Jul 12 00:17:57.495278 containerd[1438]: time="2025-07-12T00:17:57.495262052Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 
00:17:57.497815 containerd[1438]: time="2025-07-12T00:17:57.497673512Z" level=info msg="CreateContainer within sandbox \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:17:57.525133 containerd[1438]: time="2025-07-12T00:17:57.525015634Z" level=info msg="CreateContainer within sandbox \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\"" Jul 12 00:17:57.525661 containerd[1438]: time="2025-07-12T00:17:57.525633720Z" level=info msg="StartContainer for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\"" Jul 12 00:17:57.554679 systemd[1]: Started cri-containerd-4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e.scope - libcontainer container 4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e. Jul 12 00:17:57.576261 containerd[1438]: time="2025-07-12T00:17:57.575784106Z" level=info msg="StartContainer for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" returns successfully" Jul 12 00:17:57.578262 kubelet[2460]: E0712 00:17:57.578149 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:57.583106 containerd[1438]: time="2025-07-12T00:17:57.582772788Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:17:57.584118 kubelet[2460]: E0712 00:17:57.584091 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:57.640796 kubelet[2460]: I0712 00:17:57.640730 
2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6whj8" podStartSLOduration=0.810813165 podStartE2EDuration="11.640706355s" podCreationTimestamp="2025-07-12 00:17:46 +0000 UTC" firstStartedPulling="2025-07-12 00:17:46.666354255 +0000 UTC m=+7.242023662" lastFinishedPulling="2025-07-12 00:17:57.496247405 +0000 UTC m=+18.071916852" observedRunningTime="2025-07-12 00:17:57.623956984 +0000 UTC m=+18.199626431" watchObservedRunningTime="2025-07-12 00:17:57.640706355 +0000 UTC m=+18.216375802" Jul 12 00:17:57.679083 containerd[1438]: time="2025-07-12T00:17:57.678949812Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\"" Jul 12 00:17:57.680089 containerd[1438]: time="2025-07-12T00:17:57.679828677Z" level=info msg="StartContainer for \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\"" Jul 12 00:17:57.713603 systemd[1]: Started cri-containerd-412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c.scope - libcontainer container 412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c. Jul 12 00:17:57.771365 containerd[1438]: time="2025-07-12T00:17:57.771252706Z" level=info msg="StartContainer for \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\" returns successfully" Jul 12 00:17:57.795127 systemd[1]: cri-containerd-412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c.scope: Deactivated successfully. 
Jul 12 00:17:57.822015 containerd[1438]: time="2025-07-12T00:17:57.821805802Z" level=info msg="shim disconnected" id=412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c namespace=k8s.io Jul 12 00:17:57.822015 containerd[1438]: time="2025-07-12T00:17:57.821858326Z" level=warning msg="cleaning up after shim disconnected" id=412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c namespace=k8s.io Jul 12 00:17:57.822015 containerd[1438]: time="2025-07-12T00:17:57.821866566Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:58.588243 kubelet[2460]: E0712 00:17:58.588196 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:58.588243 kubelet[2460]: E0712 00:17:58.588899 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:58.592732 containerd[1438]: time="2025-07-12T00:17:58.590725299Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:17:58.631043 containerd[1438]: time="2025-07-12T00:17:58.630980012Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\"" Jul 12 00:17:58.632597 containerd[1438]: time="2025-07-12T00:17:58.631696544Z" level=info msg="StartContainer for \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\"" Jul 12 00:17:58.668695 systemd[1]: Started cri-containerd-087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194.scope - libcontainer container 
087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194. Jul 12 00:17:58.689833 systemd[1]: cri-containerd-087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194.scope: Deactivated successfully. Jul 12 00:17:58.690646 containerd[1438]: time="2025-07-12T00:17:58.690613487Z" level=info msg="StartContainer for \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\" returns successfully" Jul 12 00:17:58.716583 containerd[1438]: time="2025-07-12T00:17:58.716505640Z" level=info msg="shim disconnected" id=087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194 namespace=k8s.io Jul 12 00:17:58.716583 containerd[1438]: time="2025-07-12T00:17:58.716579086Z" level=warning msg="cleaning up after shim disconnected" id=087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194 namespace=k8s.io Jul 12 00:17:58.716583 containerd[1438]: time="2025-07-12T00:17:58.716589286Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:17:59.080420 systemd[1]: run-containerd-runc-k8s.io-087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194-runc.NDruPL.mount: Deactivated successfully. Jul 12 00:17:59.080525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194-rootfs.mount: Deactivated successfully. 
Jul 12 00:17:59.594304 kubelet[2460]: E0712 00:17:59.593989 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:17:59.597327 containerd[1438]: time="2025-07-12T00:17:59.597285305Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:17:59.619088 containerd[1438]: time="2025-07-12T00:17:59.619031909Z" level=info msg="CreateContainer within sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\"" Jul 12 00:17:59.619600 containerd[1438]: time="2025-07-12T00:17:59.619569267Z" level=info msg="StartContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\"" Jul 12 00:17:59.647699 systemd[1]: Started cri-containerd-24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9.scope - libcontainer container 24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9. Jul 12 00:17:59.670042 containerd[1438]: time="2025-07-12T00:17:59.669985041Z" level=info msg="StartContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" returns successfully" Jul 12 00:17:59.785236 kubelet[2460]: I0712 00:17:59.785187 2460 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 12 00:17:59.823211 systemd[1]: Created slice kubepods-burstable-pod07300eb5_c5ef_49e9_b87b_d18a0e155517.slice - libcontainer container kubepods-burstable-pod07300eb5_c5ef_49e9_b87b_d18a0e155517.slice. Jul 12 00:17:59.832759 systemd[1]: Created slice kubepods-burstable-pod3e229b8b_7da0_4eac_b348_de0ddbefbe29.slice - libcontainer container kubepods-burstable-pod3e229b8b_7da0_4eac_b348_de0ddbefbe29.slice. 
Jul 12 00:17:59.949404 kubelet[2460]: I0712 00:17:59.949274 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e229b8b-7da0-4eac-b348-de0ddbefbe29-config-volume\") pod \"coredns-7c65d6cfc9-w8cl9\" (UID: \"3e229b8b-7da0-4eac-b348-de0ddbefbe29\") " pod="kube-system/coredns-7c65d6cfc9-w8cl9" Jul 12 00:17:59.949404 kubelet[2460]: I0712 00:17:59.949333 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p42x\" (UniqueName: \"kubernetes.io/projected/07300eb5-c5ef-49e9-b87b-d18a0e155517-kube-api-access-8p42x\") pod \"coredns-7c65d6cfc9-hx4lr\" (UID: \"07300eb5-c5ef-49e9-b87b-d18a0e155517\") " pod="kube-system/coredns-7c65d6cfc9-hx4lr" Jul 12 00:17:59.949404 kubelet[2460]: I0712 00:17:59.949367 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvzpr\" (UniqueName: \"kubernetes.io/projected/3e229b8b-7da0-4eac-b348-de0ddbefbe29-kube-api-access-cvzpr\") pod \"coredns-7c65d6cfc9-w8cl9\" (UID: \"3e229b8b-7da0-4eac-b348-de0ddbefbe29\") " pod="kube-system/coredns-7c65d6cfc9-w8cl9" Jul 12 00:17:59.949404 kubelet[2460]: I0712 00:17:59.949384 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07300eb5-c5ef-49e9-b87b-d18a0e155517-config-volume\") pod \"coredns-7c65d6cfc9-hx4lr\" (UID: \"07300eb5-c5ef-49e9-b87b-d18a0e155517\") " pod="kube-system/coredns-7c65d6cfc9-hx4lr" Jul 12 00:18:00.128437 kubelet[2460]: E0712 00:18:00.128389 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:00.130361 containerd[1438]: time="2025-07-12T00:18:00.130301066Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hx4lr,Uid:07300eb5-c5ef-49e9-b87b-d18a0e155517,Namespace:kube-system,Attempt:0,}" Jul 12 00:18:00.137738 kubelet[2460]: E0712 00:18:00.137703 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:00.138553 containerd[1438]: time="2025-07-12T00:18:00.138454580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w8cl9,Uid:3e229b8b-7da0-4eac-b348-de0ddbefbe29,Namespace:kube-system,Attempt:0,}" Jul 12 00:18:00.597424 kubelet[2460]: E0712 00:18:00.597395 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:01.600914 kubelet[2460]: E0712 00:18:01.599148 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:01.878696 systemd-networkd[1385]: cilium_host: Link UP Jul 12 00:18:01.878810 systemd-networkd[1385]: cilium_net: Link UP Jul 12 00:18:01.878924 systemd-networkd[1385]: cilium_net: Gained carrier Jul 12 00:18:01.879032 systemd-networkd[1385]: cilium_host: Gained carrier Jul 12 00:18:01.889266 systemd-networkd[1385]: cilium_net: Gained IPv6LL Jul 12 00:18:01.972359 systemd-networkd[1385]: cilium_vxlan: Link UP Jul 12 00:18:01.972365 systemd-networkd[1385]: cilium_vxlan: Gained carrier Jul 12 00:18:02.277560 kernel: NET: Registered PF_ALG protocol family Jul 12 00:18:02.601307 kubelet[2460]: E0712 00:18:02.601256 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:02.622744 systemd-networkd[1385]: cilium_host: Gained IPv6LL Jul 12 00:18:02.879353 
systemd-networkd[1385]: lxc_health: Link UP Jul 12 00:18:02.888669 systemd-networkd[1385]: lxc_health: Gained carrier Jul 12 00:18:03.305908 systemd-networkd[1385]: lxcefeb3f2e5326: Link UP Jul 12 00:18:03.309167 systemd-networkd[1385]: lxc60b958c4a0b0: Link UP Jul 12 00:18:03.317555 kernel: eth0: renamed from tmp069ac Jul 12 00:18:03.325683 kernel: eth0: renamed from tmpbb651 Jul 12 00:18:03.333376 systemd-networkd[1385]: lxcefeb3f2e5326: Gained carrier Jul 12 00:18:03.335284 systemd-networkd[1385]: lxc60b958c4a0b0: Gained carrier Jul 12 00:18:03.391683 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Jul 12 00:18:03.603566 kubelet[2460]: E0712 00:18:03.603426 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:04.352345 systemd-networkd[1385]: lxc_health: Gained IPv6LL Jul 12 00:18:04.378897 kubelet[2460]: I0712 00:18:04.378836 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-skwbf" podStartSLOduration=9.806342349 podStartE2EDuration="19.378818684s" podCreationTimestamp="2025-07-12 00:17:45 +0000 UTC" firstStartedPulling="2025-07-12 00:17:46.468421892 +0000 UTC m=+7.044091339" lastFinishedPulling="2025-07-12 00:17:56.040898227 +0000 UTC m=+16.616567674" observedRunningTime="2025-07-12 00:18:00.620828016 +0000 UTC m=+21.196497463" watchObservedRunningTime="2025-07-12 00:18:04.378818684 +0000 UTC m=+24.954488131" Jul 12 00:18:04.605674 kubelet[2460]: E0712 00:18:04.605551 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:04.863675 systemd-networkd[1385]: lxcefeb3f2e5326: Gained IPv6LL Jul 12 00:18:05.310703 systemd-networkd[1385]: lxc60b958c4a0b0: Gained IPv6LL Jul 12 00:18:05.607051 kubelet[2460]: E0712 00:18:05.606935 2460 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:07.108645 containerd[1438]: time="2025-07-12T00:18:07.108303837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:07.108645 containerd[1438]: time="2025-07-12T00:18:07.108360880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:07.108645 containerd[1438]: time="2025-07-12T00:18:07.108376681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:07.108645 containerd[1438]: time="2025-07-12T00:18:07.108448245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:07.120171 containerd[1438]: time="2025-07-12T00:18:07.119830704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:18:07.120171 containerd[1438]: time="2025-07-12T00:18:07.119895387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:18:07.120171 containerd[1438]: time="2025-07-12T00:18:07.119910868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:07.120171 containerd[1438]: time="2025-07-12T00:18:07.120094078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:18:07.149738 systemd[1]: Started cri-containerd-069acb30fadfde4356531ca4484b1726d0ac34922fc7bdecf0b1edd9a40325c9.scope - libcontainer container 069acb30fadfde4356531ca4484b1726d0ac34922fc7bdecf0b1edd9a40325c9. Jul 12 00:18:07.151388 systemd[1]: Started cri-containerd-bb6518e79786cbcbced596c4186bbfbdde55c786c2de4476006e1b50fa1c5636.scope - libcontainer container bb6518e79786cbcbced596c4186bbfbdde55c786c2de4476006e1b50fa1c5636. Jul 12 00:18:07.164365 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:07.165508 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:18:07.187347 containerd[1438]: time="2025-07-12T00:18:07.187278291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w8cl9,Uid:3e229b8b-7da0-4eac-b348-de0ddbefbe29,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb6518e79786cbcbced596c4186bbfbdde55c786c2de4476006e1b50fa1c5636\"" Jul 12 00:18:07.187933 containerd[1438]: time="2025-07-12T00:18:07.187904765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hx4lr,Uid:07300eb5-c5ef-49e9-b87b-d18a0e155517,Namespace:kube-system,Attempt:0,} returns sandbox id \"069acb30fadfde4356531ca4484b1726d0ac34922fc7bdecf0b1edd9a40325c9\"" Jul 12 00:18:07.188379 kubelet[2460]: E0712 00:18:07.188305 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:07.190548 kubelet[2460]: E0712 00:18:07.189834 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:18:07.190652 containerd[1438]: time="2025-07-12T00:18:07.190438623Z" 
level=info msg="CreateContainer within sandbox \"bb6518e79786cbcbced596c4186bbfbdde55c786c2de4476006e1b50fa1c5636\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:18:07.191627 containerd[1438]: time="2025-07-12T00:18:07.191447558Z" level=info msg="CreateContainer within sandbox \"069acb30fadfde4356531ca4484b1726d0ac34922fc7bdecf0b1edd9a40325c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 00:18:07.211681 containerd[1438]: time="2025-07-12T00:18:07.211624055Z" level=info msg="CreateContainer within sandbox \"bb6518e79786cbcbced596c4186bbfbdde55c786c2de4476006e1b50fa1c5636\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"39acf11ebb4fbf9ea32f1d1d3fc10b2b1b32a118cf13ae430d92fb9a14cf5692\"" Jul 12 00:18:07.212461 containerd[1438]: time="2025-07-12T00:18:07.212311933Z" level=info msg="StartContainer for \"39acf11ebb4fbf9ea32f1d1d3fc10b2b1b32a118cf13ae430d92fb9a14cf5692\"" Jul 12 00:18:07.215680 containerd[1438]: time="2025-07-12T00:18:07.215555549Z" level=info msg="CreateContainer within sandbox \"069acb30fadfde4356531ca4484b1726d0ac34922fc7bdecf0b1edd9a40325c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a23c5de19a4977d701081e4d76f3d3cbf99f6cba1bac5660e5e97169eb1d6635\"" Jul 12 00:18:07.216697 containerd[1438]: time="2025-07-12T00:18:07.216639688Z" level=info msg="StartContainer for \"a23c5de19a4977d701081e4d76f3d3cbf99f6cba1bac5660e5e97169eb1d6635\"" Jul 12 00:18:07.253759 systemd[1]: Started cri-containerd-39acf11ebb4fbf9ea32f1d1d3fc10b2b1b32a118cf13ae430d92fb9a14cf5692.scope - libcontainer container 39acf11ebb4fbf9ea32f1d1d3fc10b2b1b32a118cf13ae430d92fb9a14cf5692. Jul 12 00:18:07.255194 systemd[1]: Started cri-containerd-a23c5de19a4977d701081e4d76f3d3cbf99f6cba1bac5660e5e97169eb1d6635.scope - libcontainer container a23c5de19a4977d701081e4d76f3d3cbf99f6cba1bac5660e5e97169eb1d6635. 
Jul 12 00:18:07.284739 containerd[1438]: time="2025-07-12T00:18:07.284680748Z" level=info msg="StartContainer for \"a23c5de19a4977d701081e4d76f3d3cbf99f6cba1bac5660e5e97169eb1d6635\" returns successfully"
Jul 12 00:18:07.293732 containerd[1438]: time="2025-07-12T00:18:07.293600033Z" level=info msg="StartContainer for \"39acf11ebb4fbf9ea32f1d1d3fc10b2b1b32a118cf13ae430d92fb9a14cf5692\" returns successfully"
Jul 12 00:18:07.612348 kubelet[2460]: E0712 00:18:07.611939 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:07.614106 kubelet[2460]: E0712 00:18:07.614077 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:07.624861 kubelet[2460]: I0712 00:18:07.624805 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hx4lr" podStartSLOduration=21.624786201 podStartE2EDuration="21.624786201s" podCreationTimestamp="2025-07-12 00:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:07.624540028 +0000 UTC m=+28.200209475" watchObservedRunningTime="2025-07-12 00:18:07.624786201 +0000 UTC m=+28.200455648"
Jul 12 00:18:07.665309 kubelet[2460]: I0712 00:18:07.665231 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w8cl9" podStartSLOduration=21.665215079 podStartE2EDuration="21.665215079s" podCreationTimestamp="2025-07-12 00:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:18:07.663913929 +0000 UTC m=+28.239583376" watchObservedRunningTime="2025-07-12 00:18:07.665215079 +0000 UTC m=+28.240884526"
Jul 12 00:18:08.114823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687132352.mount: Deactivated successfully.
Jul 12 00:18:08.619369 kubelet[2460]: E0712 00:18:08.619270 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:08.619947 kubelet[2460]: E0712 00:18:08.619502 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:09.618024 kubelet[2460]: E0712 00:18:09.617912 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:09.618264 kubelet[2460]: E0712 00:18:09.618244 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:18:10.386278 systemd[1]: Started sshd@7-10.0.0.83:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146).
Jul 12 00:18:10.434589 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:10.436182 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:10.440601 systemd-logind[1423]: New session 8 of user core.
Jul 12 00:18:10.446689 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 12 00:18:10.566662 sshd[3871]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:10.570628 systemd[1]: sshd@7-10.0.0.83:22-10.0.0.1:55146.service: Deactivated successfully.
Jul 12 00:18:10.573109 systemd[1]: session-8.scope: Deactivated successfully.
Jul 12 00:18:10.573804 systemd-logind[1423]: Session 8 logged out. Waiting for processes to exit.
Jul 12 00:18:10.574644 systemd-logind[1423]: Removed session 8.
Jul 12 00:18:15.583372 systemd[1]: Started sshd@8-10.0.0.83:22-10.0.0.1:44728.service - OpenSSH per-connection server daemon (10.0.0.1:44728).
Jul 12 00:18:15.619622 sshd[3887]: Accepted publickey for core from 10.0.0.1 port 44728 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:15.620946 sshd[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:15.624743 systemd-logind[1423]: New session 9 of user core.
Jul 12 00:18:15.632700 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:18:15.742129 sshd[3887]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:15.745862 systemd[1]: sshd@8-10.0.0.83:22-10.0.0.1:44728.service: Deactivated successfully.
Jul 12 00:18:15.747500 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:18:15.748234 systemd-logind[1423]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:18:15.749399 systemd-logind[1423]: Removed session 9.
Jul 12 00:18:20.753370 systemd[1]: Started sshd@9-10.0.0.83:22-10.0.0.1:44740.service - OpenSSH per-connection server daemon (10.0.0.1:44740).
Jul 12 00:18:20.796916 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 44740 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:20.799602 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:20.805164 systemd-logind[1423]: New session 10 of user core.
Jul 12 00:18:20.817706 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:18:20.952236 sshd[3905]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:20.957225 systemd[1]: sshd@9-10.0.0.83:22-10.0.0.1:44740.service: Deactivated successfully.
Jul 12 00:18:20.962073 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:18:20.962860 systemd-logind[1423]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:18:20.964133 systemd-logind[1423]: Removed session 10.
Jul 12 00:18:25.966759 systemd[1]: Started sshd@10-10.0.0.83:22-10.0.0.1:50402.service - OpenSSH per-connection server daemon (10.0.0.1:50402).
Jul 12 00:18:26.005173 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 50402 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:26.006664 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:26.010874 systemd-logind[1423]: New session 11 of user core.
Jul 12 00:18:26.023745 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:18:26.137100 sshd[3920]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:26.149255 systemd[1]: sshd@10-10.0.0.83:22-10.0.0.1:50402.service: Deactivated successfully.
Jul 12 00:18:26.150937 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:18:26.152246 systemd-logind[1423]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:18:26.170802 systemd[1]: Started sshd@11-10.0.0.83:22-10.0.0.1:50414.service - OpenSSH per-connection server daemon (10.0.0.1:50414).
Jul 12 00:18:26.172836 systemd-logind[1423]: Removed session 11.
Jul 12 00:18:26.202244 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:26.204171 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:26.208203 systemd-logind[1423]: New session 12 of user core.
Jul 12 00:18:26.219735 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:18:26.377099 sshd[3936]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:26.387491 systemd[1]: sshd@11-10.0.0.83:22-10.0.0.1:50414.service: Deactivated successfully.
Jul 12 00:18:26.393590 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:18:26.398089 systemd-logind[1423]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:18:26.407924 systemd[1]: Started sshd@12-10.0.0.83:22-10.0.0.1:50428.service - OpenSSH per-connection server daemon (10.0.0.1:50428).
Jul 12 00:18:26.409441 systemd-logind[1423]: Removed session 12.
Jul 12 00:18:26.441998 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 50428 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:26.443433 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:26.447261 systemd-logind[1423]: New session 13 of user core.
Jul 12 00:18:26.462740 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:18:26.578161 sshd[3948]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:26.581268 systemd-logind[1423]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:18:26.581606 systemd[1]: sshd@12-10.0.0.83:22-10.0.0.1:50428.service: Deactivated successfully.
Jul 12 00:18:26.583105 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:18:26.584053 systemd-logind[1423]: Removed session 13.
Jul 12 00:18:31.589090 systemd[1]: Started sshd@13-10.0.0.83:22-10.0.0.1:50440.service - OpenSSH per-connection server daemon (10.0.0.1:50440).
Jul 12 00:18:31.623110 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 50440 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:31.624303 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:31.627706 systemd-logind[1423]: New session 14 of user core.
Jul 12 00:18:31.637726 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:18:31.743979 sshd[3963]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:31.747171 systemd[1]: sshd@13-10.0.0.83:22-10.0.0.1:50440.service: Deactivated successfully.
Jul 12 00:18:31.750023 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:18:31.750676 systemd-logind[1423]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:18:31.751452 systemd-logind[1423]: Removed session 14.
Jul 12 00:18:36.755092 systemd[1]: Started sshd@14-10.0.0.83:22-10.0.0.1:45300.service - OpenSSH per-connection server daemon (10.0.0.1:45300).
Jul 12 00:18:36.791637 sshd[3977]: Accepted publickey for core from 10.0.0.1 port 45300 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:36.791195 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:36.795600 systemd-logind[1423]: New session 15 of user core.
Jul 12 00:18:36.805970 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:18:36.932995 sshd[3977]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:36.947254 systemd[1]: sshd@14-10.0.0.83:22-10.0.0.1:45300.service: Deactivated successfully.
Jul 12 00:18:36.949004 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:18:36.950424 systemd-logind[1423]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:18:36.960877 systemd[1]: Started sshd@15-10.0.0.83:22-10.0.0.1:45314.service - OpenSSH per-connection server daemon (10.0.0.1:45314).
Jul 12 00:18:36.964611 systemd-logind[1423]: Removed session 15.
Jul 12 00:18:36.996501 sshd[3992]: Accepted publickey for core from 10.0.0.1 port 45314 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:36.996306 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:37.002247 systemd-logind[1423]: New session 16 of user core.
Jul 12 00:18:37.009687 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:18:37.250672 sshd[3992]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:37.262799 systemd[1]: sshd@15-10.0.0.83:22-10.0.0.1:45314.service: Deactivated successfully.
Jul 12 00:18:37.265590 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:18:37.268573 systemd-logind[1423]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:18:37.281857 systemd[1]: Started sshd@16-10.0.0.83:22-10.0.0.1:45330.service - OpenSSH per-connection server daemon (10.0.0.1:45330).
Jul 12 00:18:37.283032 systemd-logind[1423]: Removed session 16.
Jul 12 00:18:37.315767 sshd[4005]: Accepted publickey for core from 10.0.0.1 port 45330 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:37.317085 sshd[4005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:37.321663 systemd-logind[1423]: New session 17 of user core.
Jul 12 00:18:37.337669 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:18:38.804308 sshd[4005]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:38.814357 systemd[1]: sshd@16-10.0.0.83:22-10.0.0.1:45330.service: Deactivated successfully.
Jul 12 00:18:38.816258 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:18:38.820566 systemd-logind[1423]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:18:38.828899 systemd[1]: Started sshd@17-10.0.0.83:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342).
Jul 12 00:18:38.832047 systemd-logind[1423]: Removed session 17.
Jul 12 00:18:38.867871 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:38.869374 sshd[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:38.873601 systemd-logind[1423]: New session 18 of user core.
Jul 12 00:18:38.885737 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:18:39.114090 sshd[4044]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:39.123462 systemd[1]: sshd@17-10.0.0.83:22-10.0.0.1:45342.service: Deactivated successfully.
Jul 12 00:18:39.125271 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:18:39.130085 systemd-logind[1423]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:18:39.136948 systemd[1]: Started sshd@18-10.0.0.83:22-10.0.0.1:45352.service - OpenSSH per-connection server daemon (10.0.0.1:45352).
Jul 12 00:18:39.138203 systemd-logind[1423]: Removed session 18.
Jul 12 00:18:39.169623 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 45352 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:39.171142 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:39.175148 systemd-logind[1423]: New session 19 of user core.
Jul 12 00:18:39.185636 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:18:39.291765 sshd[4057]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:39.295366 systemd[1]: sshd@18-10.0.0.83:22-10.0.0.1:45352.service: Deactivated successfully.
Jul 12 00:18:39.297155 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:18:39.297792 systemd-logind[1423]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:18:39.298677 systemd-logind[1423]: Removed session 19.
Jul 12 00:18:44.303718 systemd[1]: Started sshd@19-10.0.0.83:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198).
Jul 12 00:18:44.337013 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:44.338433 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:44.344410 systemd-logind[1423]: New session 20 of user core.
Jul 12 00:18:44.353114 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:18:44.470485 sshd[4076]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:44.473688 systemd[1]: sshd@19-10.0.0.83:22-10.0.0.1:51198.service: Deactivated successfully.
Jul 12 00:18:44.475809 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:18:44.477289 systemd-logind[1423]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:18:44.478387 systemd-logind[1423]: Removed session 20.
Jul 12 00:18:49.481085 systemd[1]: Started sshd@20-10.0.0.83:22-10.0.0.1:51210.service - OpenSSH per-connection server daemon (10.0.0.1:51210).
Jul 12 00:18:49.515852 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 51210 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:49.517313 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:49.521061 systemd-logind[1423]: New session 21 of user core.
Jul 12 00:18:49.536707 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:18:49.642591 sshd[4092]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:49.646186 systemd[1]: sshd@20-10.0.0.83:22-10.0.0.1:51210.service: Deactivated successfully.
Jul 12 00:18:49.649547 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:18:49.650300 systemd-logind[1423]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:18:49.651185 systemd-logind[1423]: Removed session 21.
Jul 12 00:18:54.657429 systemd[1]: Started sshd@21-10.0.0.83:22-10.0.0.1:42798.service - OpenSSH per-connection server daemon (10.0.0.1:42798).
Jul 12 00:18:54.691920 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 42798 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:54.693246 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:54.696561 systemd-logind[1423]: New session 22 of user core.
Jul 12 00:18:54.704835 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:18:54.807489 sshd[4107]: pam_unix(sshd:session): session closed for user core
Jul 12 00:18:54.816967 systemd[1]: sshd@21-10.0.0.83:22-10.0.0.1:42798.service: Deactivated successfully.
Jul 12 00:18:54.818857 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:18:54.820317 systemd-logind[1423]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:18:54.821799 systemd[1]: Started sshd@22-10.0.0.83:22-10.0.0.1:42812.service - OpenSSH per-connection server daemon (10.0.0.1:42812).
Jul 12 00:18:54.822597 systemd-logind[1423]: Removed session 22.
Jul 12 00:18:54.856288 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 42812 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:18:54.857589 sshd[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:18:54.862579 systemd-logind[1423]: New session 23 of user core.
Jul 12 00:18:54.867659 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:18:57.206777 containerd[1438]: time="2025-07-12T00:18:57.206727429Z" level=info msg="StopContainer for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" with timeout 30 (s)"
Jul 12 00:18:57.207611 containerd[1438]: time="2025-07-12T00:18:57.207528038Z" level=info msg="Stop container \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" with signal terminated"
Jul 12 00:18:57.223289 systemd[1]: cri-containerd-4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e.scope: Deactivated successfully.
Jul 12 00:18:57.237601 containerd[1438]: time="2025-07-12T00:18:57.237557772Z" level=info msg="StopContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" with timeout 2 (s)"
Jul 12 00:18:57.240049 containerd[1438]: time="2025-07-12T00:18:57.239497994Z" level=info msg="Stop container \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" with signal terminated"
Jul 12 00:18:57.240632 containerd[1438]: time="2025-07-12T00:18:57.240028040Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:18:57.246266 systemd-networkd[1385]: lxc_health: Link DOWN
Jul 12 00:18:57.246273 systemd-networkd[1385]: lxc_health: Lost carrier
Jul 12 00:18:57.258696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e-rootfs.mount: Deactivated successfully.
Jul 12 00:18:57.269041 containerd[1438]: time="2025-07-12T00:18:57.268819920Z" level=info msg="shim disconnected" id=4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e namespace=k8s.io
Jul 12 00:18:57.269041 containerd[1438]: time="2025-07-12T00:18:57.268884520Z" level=warning msg="cleaning up after shim disconnected" id=4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e namespace=k8s.io
Jul 12 00:18:57.269041 containerd[1438]: time="2025-07-12T00:18:57.268895640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:57.269991 systemd[1]: cri-containerd-24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9.scope: Deactivated successfully.
Jul 12 00:18:57.270270 systemd[1]: cri-containerd-24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9.scope: Consumed 6.768s CPU time.
Jul 12 00:18:57.285403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9-rootfs.mount: Deactivated successfully.
Jul 12 00:18:57.292707 containerd[1438]: time="2025-07-12T00:18:57.292645865Z" level=info msg="shim disconnected" id=24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9 namespace=k8s.io
Jul 12 00:18:57.292963 containerd[1438]: time="2025-07-12T00:18:57.292760786Z" level=warning msg="cleaning up after shim disconnected" id=24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9 namespace=k8s.io
Jul 12 00:18:57.292963 containerd[1438]: time="2025-07-12T00:18:57.292771466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:57.310284 containerd[1438]: time="2025-07-12T00:18:57.310106419Z" level=info msg="StopContainer for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" returns successfully"
Jul 12 00:18:57.310895 containerd[1438]: time="2025-07-12T00:18:57.310868307Z" level=info msg="StopPodSandbox for \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\""
Jul 12 00:18:57.310963 containerd[1438]: time="2025-07-12T00:18:57.310907588Z" level=info msg="Container to stop \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.311960 containerd[1438]: time="2025-07-12T00:18:57.311592155Z" level=info msg="StopContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" returns successfully"
Jul 12 00:18:57.311960 containerd[1438]: time="2025-07-12T00:18:57.311905439Z" level=info msg="StopPodSandbox for \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\""
Jul 12 00:18:57.311960 containerd[1438]: time="2025-07-12T00:18:57.311938839Z" level=info msg="Container to stop \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.311960 containerd[1438]: time="2025-07-12T00:18:57.311951799Z" level=info msg="Container to stop \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.311960 containerd[1438]: time="2025-07-12T00:18:57.311962239Z" level=info msg="Container to stop \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.312130 containerd[1438]: time="2025-07-12T00:18:57.311972119Z" level=info msg="Container to stop \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.312130 containerd[1438]: time="2025-07-12T00:18:57.311981559Z" level=info msg="Container to stop \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:18:57.313174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2-shm.mount: Deactivated successfully.
Jul 12 00:18:57.315659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b-shm.mount: Deactivated successfully.
Jul 12 00:18:57.317285 systemd[1]: cri-containerd-a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b.scope: Deactivated successfully.
Jul 12 00:18:57.318931 systemd[1]: cri-containerd-8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2.scope: Deactivated successfully.
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340654638Z" level=info msg="shim disconnected" id=8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2 namespace=k8s.io
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340713839Z" level=warning msg="cleaning up after shim disconnected" id=8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2 namespace=k8s.io
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340724279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340665158Z" level=info msg="shim disconnected" id=a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b namespace=k8s.io
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340810360Z" level=warning msg="cleaning up after shim disconnected" id=a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b namespace=k8s.io
Jul 12 00:18:57.340818 containerd[1438]: time="2025-07-12T00:18:57.340818720Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:18:57.356427 containerd[1438]: time="2025-07-12T00:18:57.356213051Z" level=info msg="TearDown network for sandbox \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" successfully"
Jul 12 00:18:57.356427 containerd[1438]: time="2025-07-12T00:18:57.356249252Z" level=info msg="StopPodSandbox for \"a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b\" returns successfully"
Jul 12 00:18:57.358224 containerd[1438]: time="2025-07-12T00:18:57.358179113Z" level=info msg="TearDown network for sandbox \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\" successfully"
Jul 12 00:18:57.358224 containerd[1438]: time="2025-07-12T00:18:57.358207993Z" level=info msg="StopPodSandbox for \"8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2\" returns successfully"
Jul 12 00:18:57.529974 kubelet[2460]: I0712 00:18:57.529850 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-run\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.529974 kubelet[2460]: I0712 00:18:57.529897 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7190b-198e-4584-9006-49ea632f777a-clustermesh-secrets\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.529974 kubelet[2460]: I0712 00:18:57.529943 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7190b-198e-4584-9006-49ea632f777a-cilium-config-path\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.529974 kubelet[2460]: I0712 00:18:57.529961 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-etc-cni-netd\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.529974 kubelet[2460]: I0712 00:18:57.529978 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/699bece4-ebf8-4df7-8103-b3358eb38e0a-cilium-config-path\") pod \"699bece4-ebf8-4df7-8103-b3358eb38e0a\" (UID: \"699bece4-ebf8-4df7-8103-b3358eb38e0a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.529995 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-kernel\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.530012 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ljwb\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-kube-api-access-8ljwb\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.530027 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-lib-modules\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.530042 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-net\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.530059 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-hubble-tls\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530434 kubelet[2460]: I0712 00:18:57.530076 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-bpf-maps\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530631 kubelet[2460]: I0712 00:18:57.530093 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cni-path\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530631 kubelet[2460]: I0712 00:18:57.530107 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-xtables-lock\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530631 kubelet[2460]: I0712 00:18:57.530121 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-hostproc\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530631 kubelet[2460]: I0712 00:18:57.530138 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-cgroup\") pod \"edb7190b-198e-4584-9006-49ea632f777a\" (UID: \"edb7190b-198e-4584-9006-49ea632f777a\") "
Jul 12 00:18:57.530631 kubelet[2460]: I0712 00:18:57.530154 2460 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7n49\" (UniqueName: \"kubernetes.io/projected/699bece4-ebf8-4df7-8103-b3358eb38e0a-kube-api-access-t7n49\") pod \"699bece4-ebf8-4df7-8103-b3358eb38e0a\" (UID: \"699bece4-ebf8-4df7-8103-b3358eb38e0a\") "
Jul 12 00:18:57.535477 kubelet[2460]: I0712 00:18:57.535016 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535477 kubelet[2460]: I0712 00:18:57.535086 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-hostproc" (OuterVolumeSpecName: "hostproc") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535477 kubelet[2460]: I0712 00:18:57.535108 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535477 kubelet[2460]: I0712 00:18:57.535176 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cni-path" (OuterVolumeSpecName: "cni-path") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535477 kubelet[2460]: I0712 00:18:57.535257 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535720 kubelet[2460]: I0712 00:18:57.535276 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535720 kubelet[2460]: I0712 00:18:57.535293 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535720 kubelet[2460]: I0712 00:18:57.535306 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 12 00:18:57.535720 kubelet[2460]: I0712 00:18:57.535468 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:18:57.535904 kubelet[2460]: I0712 00:18:57.535879 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 12 00:18:57.537725 kubelet[2460]: I0712 00:18:57.537680 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/699bece4-ebf8-4df7-8103-b3358eb38e0a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "699bece4-ebf8-4df7-8103-b3358eb38e0a" (UID: "699bece4-ebf8-4df7-8103-b3358eb38e0a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:18:57.538146 kubelet[2460]: I0712 00:18:57.538108 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edb7190b-198e-4584-9006-49ea632f777a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 12 00:18:57.538793 kubelet[2460]: I0712 00:18:57.538754 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/699bece4-ebf8-4df7-8103-b3358eb38e0a-kube-api-access-t7n49" (OuterVolumeSpecName: "kube-api-access-t7n49") pod "699bece4-ebf8-4df7-8103-b3358eb38e0a" (UID: "699bece4-ebf8-4df7-8103-b3358eb38e0a"). InnerVolumeSpecName "kube-api-access-t7n49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:18:57.538870 kubelet[2460]: I0712 00:18:57.538791 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:18:57.539252 kubelet[2460]: I0712 00:18:57.539167 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edb7190b-198e-4584-9006-49ea632f777a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 12 00:18:57.540383 kubelet[2460]: I0712 00:18:57.540330 2460 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-kube-api-access-8ljwb" (OuterVolumeSpecName: "kube-api-access-8ljwb") pod "edb7190b-198e-4584-9006-49ea632f777a" (UID: "edb7190b-198e-4584-9006-49ea632f777a"). InnerVolumeSpecName "kube-api-access-8ljwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 12 00:18:57.630801 kubelet[2460]: I0712 00:18:57.630747 2460 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.630801 kubelet[2460]: I0712 00:18:57.630787 2460 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ljwb\" (UniqueName: \"kubernetes.io/projected/edb7190b-198e-4584-9006-49ea632f777a-kube-api-access-8ljwb\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.630801 kubelet[2460]: I0712 00:18:57.630800 2460 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.630801 kubelet[2460]: I0712 00:18:57.630808 2460 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.630801 kubelet[2460]: I0712 00:18:57.630818 2460 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630826 2460 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630859 2460 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630868 2460 
reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630876 2460 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630884 2460 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7n49\" (UniqueName: \"kubernetes.io/projected/699bece4-ebf8-4df7-8103-b3358eb38e0a-kube-api-access-t7n49\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630892 2460 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630900 2460 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edb7190b-198e-4584-9006-49ea632f777a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631056 kubelet[2460]: I0712 00:18:57.630908 2460 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edb7190b-198e-4584-9006-49ea632f777a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631227 kubelet[2460]: I0712 00:18:57.630916 2460 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631227 kubelet[2460]: I0712 00:18:57.630924 2460 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/699bece4-ebf8-4df7-8103-b3358eb38e0a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.631227 kubelet[2460]: I0712 00:18:57.630932 2460 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edb7190b-198e-4584-9006-49ea632f777a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:18:57.710467 kubelet[2460]: I0712 00:18:57.710360 2460 scope.go:117] "RemoveContainer" containerID="24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9" Jul 12 00:18:57.712808 containerd[1438]: time="2025-07-12T00:18:57.712456292Z" level=info msg="RemoveContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\"" Jul 12 00:18:57.717227 systemd[1]: Removed slice kubepods-burstable-podedb7190b_198e_4584_9006_49ea632f777a.slice - libcontainer container kubepods-burstable-podedb7190b_198e_4584_9006_49ea632f777a.slice. Jul 12 00:18:57.717327 systemd[1]: kubepods-burstable-podedb7190b_198e_4584_9006_49ea632f777a.slice: Consumed 6.940s CPU time. Jul 12 00:18:57.720422 systemd[1]: Removed slice kubepods-besteffort-pod699bece4_ebf8_4df7_8103_b3358eb38e0a.slice - libcontainer container kubepods-besteffort-pod699bece4_ebf8_4df7_8103_b3358eb38e0a.slice. 
Jul 12 00:18:57.725093 containerd[1438]: time="2025-07-12T00:18:57.724970071Z" level=info msg="RemoveContainer for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" returns successfully" Jul 12 00:18:57.727590 kubelet[2460]: I0712 00:18:57.727552 2460 scope.go:117] "RemoveContainer" containerID="087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194" Jul 12 00:18:57.728972 containerd[1438]: time="2025-07-12T00:18:57.728895075Z" level=info msg="RemoveContainer for \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\"" Jul 12 00:18:57.731409 containerd[1438]: time="2025-07-12T00:18:57.731346382Z" level=info msg="RemoveContainer for \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\" returns successfully" Jul 12 00:18:57.731628 kubelet[2460]: I0712 00:18:57.731582 2460 scope.go:117] "RemoveContainer" containerID="412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c" Jul 12 00:18:57.732699 containerd[1438]: time="2025-07-12T00:18:57.732596716Z" level=info msg="RemoveContainer for \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\"" Jul 12 00:18:57.735051 containerd[1438]: time="2025-07-12T00:18:57.734995302Z" level=info msg="RemoveContainer for \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\" returns successfully" Jul 12 00:18:57.735992 kubelet[2460]: I0712 00:18:57.735905 2460 scope.go:117] "RemoveContainer" containerID="173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1" Jul 12 00:18:57.737936 containerd[1438]: time="2025-07-12T00:18:57.737871694Z" level=info msg="RemoveContainer for \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\"" Jul 12 00:18:57.753249 containerd[1438]: time="2025-07-12T00:18:57.753168584Z" level=info msg="RemoveContainer for \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\" returns successfully" Jul 12 00:18:57.753602 kubelet[2460]: I0712 00:18:57.753529 2460 scope.go:117] 
"RemoveContainer" containerID="fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48" Jul 12 00:18:57.755976 containerd[1438]: time="2025-07-12T00:18:57.755930855Z" level=info msg="RemoveContainer for \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\"" Jul 12 00:18:57.758392 containerd[1438]: time="2025-07-12T00:18:57.758339642Z" level=info msg="RemoveContainer for \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\" returns successfully" Jul 12 00:18:57.758654 kubelet[2460]: I0712 00:18:57.758617 2460 scope.go:117] "RemoveContainer" containerID="24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9" Jul 12 00:18:57.758917 containerd[1438]: time="2025-07-12T00:18:57.758844127Z" level=error msg="ContainerStatus for \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\": not found" Jul 12 00:18:57.764586 kubelet[2460]: E0712 00:18:57.764549 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\": not found" containerID="24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9" Jul 12 00:18:57.764675 kubelet[2460]: I0712 00:18:57.764586 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9"} err="failed to get container status \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"24d67f98bee5e854d0e9ef7597c5d922d477bbc03e779f856a13769b4d9469a9\": not found" Jul 12 00:18:57.764675 kubelet[2460]: I0712 00:18:57.764664 2460 scope.go:117] "RemoveContainer" 
containerID="087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194" Jul 12 00:18:57.764951 containerd[1438]: time="2025-07-12T00:18:57.764855594Z" level=error msg="ContainerStatus for \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\": not found" Jul 12 00:18:57.765009 kubelet[2460]: E0712 00:18:57.764984 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\": not found" containerID="087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194" Jul 12 00:18:57.765050 kubelet[2460]: I0712 00:18:57.765008 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194"} err="failed to get container status \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\": rpc error: code = NotFound desc = an error occurred when try to find container \"087b29c13b1b40d9f0b1e1a17e0db6080160ffed5cc8b19ddd46a62724a3c194\": not found" Jul 12 00:18:57.767475 kubelet[2460]: I0712 00:18:57.765022 2460 scope.go:117] "RemoveContainer" containerID="412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c" Jul 12 00:18:57.768250 containerd[1438]: time="2025-07-12T00:18:57.768186231Z" level=error msg="ContainerStatus for \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\": not found" Jul 12 00:18:57.768545 kubelet[2460]: E0712 00:18:57.768399 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\": not found" containerID="412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c" Jul 12 00:18:57.768545 kubelet[2460]: I0712 00:18:57.768432 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c"} err="failed to get container status \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"412051e858cc7e822dc494262d5b306c7df7fc10a8a4147c73692a9b2c4b8e6c\": not found" Jul 12 00:18:57.768545 kubelet[2460]: I0712 00:18:57.768450 2460 scope.go:117] "RemoveContainer" containerID="173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1" Jul 12 00:18:57.768768 containerd[1438]: time="2025-07-12T00:18:57.768655237Z" level=error msg="ContainerStatus for \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\": not found" Jul 12 00:18:57.768813 kubelet[2460]: E0712 00:18:57.768792 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\": not found" containerID="173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1" Jul 12 00:18:57.768853 kubelet[2460]: I0712 00:18:57.768816 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1"} err="failed to get container status \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"173f08962ff589c29b9158c60f1f7bdfcb443afd38898f680cd0b1e1968aecf1\": not found" Jul 12 00:18:57.768853 kubelet[2460]: I0712 00:18:57.768845 2460 scope.go:117] "RemoveContainer" containerID="fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48" Jul 12 00:18:57.769127 containerd[1438]: time="2025-07-12T00:18:57.769088401Z" level=error msg="ContainerStatus for \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\": not found" Jul 12 00:18:57.769232 kubelet[2460]: E0712 00:18:57.769198 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\": not found" containerID="fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48" Jul 12 00:18:57.769232 kubelet[2460]: I0712 00:18:57.769218 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48"} err="failed to get container status \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc5e9da556a48ab91403cfc8d64f5083fbceca9d0d204ae8dc952ed851e8af48\": not found" Jul 12 00:18:57.769381 kubelet[2460]: I0712 00:18:57.769236 2460 scope.go:117] "RemoveContainer" containerID="4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e" Jul 12 00:18:57.770258 containerd[1438]: time="2025-07-12T00:18:57.770216494Z" level=info msg="RemoveContainer for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\"" Jul 12 00:18:57.772576 containerd[1438]: time="2025-07-12T00:18:57.772502319Z" level=info msg="RemoveContainer for 
\"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" returns successfully" Jul 12 00:18:57.773236 kubelet[2460]: I0712 00:18:57.772748 2460 scope.go:117] "RemoveContainer" containerID="4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e" Jul 12 00:18:57.773315 containerd[1438]: time="2025-07-12T00:18:57.773081526Z" level=error msg="ContainerStatus for \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\": not found" Jul 12 00:18:57.794094 kubelet[2460]: E0712 00:18:57.794059 2460 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\": not found" containerID="4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e" Jul 12 00:18:57.794196 kubelet[2460]: I0712 00:18:57.794098 2460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e"} err="failed to get container status \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bb52b7fdfd740ef7c1f9208742f8f4c3135582db49ef1fde8b69c6e43d55b1e\": not found" Jul 12 00:18:58.214298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e7012d4250dd5c7d35ec4fbc0aa8ca628404d8e980223916fd1d18e8b74e6b2-rootfs.mount: Deactivated successfully. Jul 12 00:18:58.214394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a941735a4f7cc3e17df4521ae691d90432924bdc9c95bd875d897cdb2f98b24b-rootfs.mount: Deactivated successfully. 
Jul 12 00:18:58.214444 systemd[1]: var-lib-kubelet-pods-699bece4\x2debf8\x2d4df7\x2d8103\x2db3358eb38e0a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7n49.mount: Deactivated successfully. Jul 12 00:18:58.214500 systemd[1]: var-lib-kubelet-pods-edb7190b\x2d198e\x2d4584\x2d9006\x2d49ea632f777a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8ljwb.mount: Deactivated successfully. Jul 12 00:18:58.214582 systemd[1]: var-lib-kubelet-pods-edb7190b\x2d198e\x2d4584\x2d9006\x2d49ea632f777a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:18:58.214632 systemd[1]: var-lib-kubelet-pods-edb7190b\x2d198e\x2d4584\x2d9006\x2d49ea632f777a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:18:59.151376 sshd[4122]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:59.158119 systemd[1]: sshd@22-10.0.0.83:22-10.0.0.1:42812.service: Deactivated successfully. Jul 12 00:18:59.159720 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 00:18:59.161598 systemd[1]: session-23.scope: Consumed 1.662s CPU time. Jul 12 00:18:59.162868 systemd-logind[1423]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:18:59.168796 systemd[1]: Started sshd@23-10.0.0.83:22-10.0.0.1:42820.service - OpenSSH per-connection server daemon (10.0.0.1:42820). Jul 12 00:18:59.170624 systemd-logind[1423]: Removed session 23. Jul 12 00:18:59.202283 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 42820 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:18:59.203494 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:18:59.206962 systemd-logind[1423]: New session 24 of user core. Jul 12 00:18:59.220665 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 12 00:18:59.520588 kubelet[2460]: I0712 00:18:59.518599 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="699bece4-ebf8-4df7-8103-b3358eb38e0a" path="/var/lib/kubelet/pods/699bece4-ebf8-4df7-8103-b3358eb38e0a/volumes" Jul 12 00:18:59.520588 kubelet[2460]: I0712 00:18:59.519225 2460 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb7190b-198e-4584-9006-49ea632f777a" path="/var/lib/kubelet/pods/edb7190b-198e-4584-9006-49ea632f777a/volumes" Jul 12 00:18:59.573140 kubelet[2460]: E0712 00:18:59.573101 2460 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:18:59.935456 sshd[4280]: pam_unix(sshd:session): session closed for user core Jul 12 00:18:59.944219 systemd[1]: sshd@23-10.0.0.83:22-10.0.0.1:42820.service: Deactivated successfully. Jul 12 00:18:59.949612 systemd[1]: session-24.scope: Deactivated successfully. 
Jul 12 00:18:59.952424 kubelet[2460]: E0712 00:18:59.952387 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="apply-sysctl-overwrites" Jul 12 00:18:59.952424 kubelet[2460]: E0712 00:18:59.952414 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="cilium-agent" Jul 12 00:18:59.952424 kubelet[2460]: E0712 00:18:59.952423 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="mount-cgroup" Jul 12 00:18:59.952424 kubelet[2460]: E0712 00:18:59.952429 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="699bece4-ebf8-4df7-8103-b3358eb38e0a" containerName="cilium-operator" Jul 12 00:18:59.952424 kubelet[2460]: E0712 00:18:59.952434 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="mount-bpf-fs" Jul 12 00:18:59.952672 kubelet[2460]: E0712 00:18:59.952440 2460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="clean-cilium-state" Jul 12 00:18:59.952672 kubelet[2460]: I0712 00:18:59.952469 2460 memory_manager.go:354] "RemoveStaleState removing state" podUID="699bece4-ebf8-4df7-8103-b3358eb38e0a" containerName="cilium-operator" Jul 12 00:18:59.952672 kubelet[2460]: I0712 00:18:59.952476 2460 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb7190b-198e-4584-9006-49ea632f777a" containerName="cilium-agent" Jul 12 00:18:59.954873 systemd-logind[1423]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:18:59.965903 systemd[1]: Started sshd@24-10.0.0.83:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). Jul 12 00:18:59.973359 systemd-logind[1423]: Removed session 24. 
Jul 12 00:18:59.977804 systemd[1]: Created slice kubepods-burstable-pod116775cb_f38d_4e72_9a45_d5925d5861d0.slice - libcontainer container kubepods-burstable-pod116775cb_f38d_4e72_9a45_d5925d5861d0.slice. Jul 12 00:19:00.013298 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:19:00.017189 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:19:00.022270 systemd-logind[1423]: New session 25 of user core. Jul 12 00:19:00.026784 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 12 00:19:00.078055 sshd[4293]: pam_unix(sshd:session): session closed for user core Jul 12 00:19:00.089100 systemd[1]: sshd@24-10.0.0.83:22-10.0.0.1:42826.service: Deactivated successfully. Jul 12 00:19:00.092135 systemd[1]: session-25.scope: Deactivated successfully. Jul 12 00:19:00.093640 systemd-logind[1423]: Session 25 logged out. Waiting for processes to exit. Jul 12 00:19:00.094709 systemd[1]: Started sshd@25-10.0.0.83:22-10.0.0.1:42830.service - OpenSSH per-connection server daemon (10.0.0.1:42830). Jul 12 00:19:00.095567 systemd-logind[1423]: Removed session 25. Jul 12 00:19:00.129309 sshd[4301]: Accepted publickey for core from 10.0.0.1 port 42830 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:19:00.130579 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:19:00.134501 systemd-logind[1423]: New session 26 of user core. 
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145615 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/116775cb-f38d-4e72-9a45-d5925d5861d0-clustermesh-secrets\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145652 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-cni-path\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145673 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-lib-modules\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145689 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-xtables-lock\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145703 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/116775cb-f38d-4e72-9a45-d5925d5861d0-cilium-config-path\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.145958 kubelet[2460]: I0712 00:19:00.145718 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-cilium-cgroup\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145731 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-host-proc-sys-kernel\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145745 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/116775cb-f38d-4e72-9a45-d5925d5861d0-hubble-tls\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145759 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-etc-cni-netd\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145777 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-host-proc-sys-net\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145793 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbzq\" (UniqueName: \"kubernetes.io/projected/116775cb-f38d-4e72-9a45-d5925d5861d0-kube-api-access-qnbzq\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146158 kubelet[2460]: I0712 00:19:00.145809 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-hostproc\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146277 kubelet[2460]: I0712 00:19:00.145825 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-cilium-run\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146277 kubelet[2460]: I0712 00:19:00.145842 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/116775cb-f38d-4e72-9a45-d5925d5861d0-bpf-maps\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146277 kubelet[2460]: I0712 00:19:00.145865 2460 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/116775cb-f38d-4e72-9a45-d5925d5861d0-cilium-ipsec-secrets\") pod \"cilium-n6pf4\" (UID: \"116775cb-f38d-4e72-9a45-d5925d5861d0\") " pod="kube-system/cilium-n6pf4"
Jul 12 00:19:00.146476 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 12 00:19:00.287159 kubelet[2460]: E0712 00:19:00.287023 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:00.288878 containerd[1438]: time="2025-07-12T00:19:00.288825582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6pf4,Uid:116775cb-f38d-4e72-9a45-d5925d5861d0,Namespace:kube-system,Attempt:0,}"
Jul 12 00:19:00.305870 containerd[1438]: time="2025-07-12T00:19:00.305762954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:19:00.305870 containerd[1438]: time="2025-07-12T00:19:00.305818874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:19:00.305870 containerd[1438]: time="2025-07-12T00:19:00.305841434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:19:00.306118 containerd[1438]: time="2025-07-12T00:19:00.305939195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:19:00.328768 systemd[1]: Started cri-containerd-242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e.scope - libcontainer container 242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e.
Jul 12 00:19:00.349495 containerd[1438]: time="2025-07-12T00:19:00.349283913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n6pf4,Uid:116775cb-f38d-4e72-9a45-d5925d5861d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\""
Jul 12 00:19:00.350259 kubelet[2460]: E0712 00:19:00.350150 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:00.354084 containerd[1438]: time="2025-07-12T00:19:00.354047042Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:19:00.364084 containerd[1438]: time="2025-07-12T00:19:00.364038503Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823\""
Jul 12 00:19:00.366496 containerd[1438]: time="2025-07-12T00:19:00.366467487Z" level=info msg="StartContainer for \"cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823\""
Jul 12 00:19:00.391730 systemd[1]: Started cri-containerd-cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823.scope - libcontainer container cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823.
Jul 12 00:19:00.414087 containerd[1438]: time="2025-07-12T00:19:00.414033408Z" level=info msg="StartContainer for \"cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823\" returns successfully"
Jul 12 00:19:00.438283 systemd[1]: cri-containerd-cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823.scope: Deactivated successfully.
Jul 12 00:19:00.484538 containerd[1438]: time="2025-07-12T00:19:00.484429359Z" level=info msg="shim disconnected" id=cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823 namespace=k8s.io
Jul 12 00:19:00.484538 containerd[1438]: time="2025-07-12T00:19:00.484483520Z" level=warning msg="cleaning up after shim disconnected" id=cd1029fb42688cb52960d4de8728091a3fa1768feea3364b78e069f966624823 namespace=k8s.io
Jul 12 00:19:00.484538 containerd[1438]: time="2025-07-12T00:19:00.484493000Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:19:00.721048 kubelet[2460]: E0712 00:19:00.721001 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:00.723731 containerd[1438]: time="2025-07-12T00:19:00.723688338Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:19:00.735507 containerd[1438]: time="2025-07-12T00:19:00.735455457Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f\""
Jul 12 00:19:00.735974 containerd[1438]: time="2025-07-12T00:19:00.735954582Z" level=info msg="StartContainer for \"20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f\""
Jul 12 00:19:00.763717 systemd[1]: Started cri-containerd-20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f.scope - libcontainer container 20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f.
Jul 12 00:19:00.785964 containerd[1438]: time="2025-07-12T00:19:00.785848446Z" level=info msg="StartContainer for \"20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f\" returns successfully"
Jul 12 00:19:00.794602 systemd[1]: cri-containerd-20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f.scope: Deactivated successfully.
Jul 12 00:19:00.817736 containerd[1438]: time="2025-07-12T00:19:00.817660928Z" level=info msg="shim disconnected" id=20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f namespace=k8s.io
Jul 12 00:19:00.817736 containerd[1438]: time="2025-07-12T00:19:00.817734128Z" level=warning msg="cleaning up after shim disconnected" id=20a4fb3bb37465c0f8c233b65427388c2bb9a75bb485129bbacc4c6b874f5d8f namespace=k8s.io
Jul 12 00:19:00.817944 containerd[1438]: time="2025-07-12T00:19:00.817743168Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:19:01.502322 kubelet[2460]: I0712 00:19:01.502251 2460 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:19:01Z","lastTransitionTime":"2025-07-12T00:19:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:19:01.723733 kubelet[2460]: E0712 00:19:01.723703 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:01.727952 containerd[1438]: time="2025-07-12T00:19:01.726699208Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:19:01.742571 containerd[1438]: time="2025-07-12T00:19:01.741744236Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082\""
Jul 12 00:19:01.745043 containerd[1438]: time="2025-07-12T00:19:01.744996908Z" level=info msg="StartContainer for \"b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082\""
Jul 12 00:19:01.775708 systemd[1]: Started cri-containerd-b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082.scope - libcontainer container b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082.
Jul 12 00:19:01.799170 systemd[1]: cri-containerd-b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082.scope: Deactivated successfully.
Jul 12 00:19:01.799805 containerd[1438]: time="2025-07-12T00:19:01.799749364Z" level=info msg="StartContainer for \"b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082\" returns successfully"
Jul 12 00:19:01.826058 containerd[1438]: time="2025-07-12T00:19:01.825984141Z" level=info msg="shim disconnected" id=b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082 namespace=k8s.io
Jul 12 00:19:01.826058 containerd[1438]: time="2025-07-12T00:19:01.826060101Z" level=warning msg="cleaning up after shim disconnected" id=b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082 namespace=k8s.io
Jul 12 00:19:01.826250 containerd[1438]: time="2025-07-12T00:19:01.826070981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:19:02.251322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0ff3e017630dba331ad96b72a8e0b1e17744dd62c0c2ca0e03228a21bddc082-rootfs.mount: Deactivated successfully.
Jul 12 00:19:02.728991 kubelet[2460]: E0712 00:19:02.728752 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:02.733371 containerd[1438]: time="2025-07-12T00:19:02.733324361Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:19:02.744942 containerd[1438]: time="2025-07-12T00:19:02.744896910Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99\""
Jul 12 00:19:02.747113 containerd[1438]: time="2025-07-12T00:19:02.746588287Z" level=info msg="StartContainer for \"602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99\""
Jul 12 00:19:02.773686 systemd[1]: Started cri-containerd-602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99.scope - libcontainer container 602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99.
Jul 12 00:19:02.792731 systemd[1]: cri-containerd-602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99.scope: Deactivated successfully.
Jul 12 00:19:02.797440 containerd[1438]: time="2025-07-12T00:19:02.797377848Z" level=info msg="StartContainer for \"602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99\" returns successfully"
Jul 12 00:19:02.816806 containerd[1438]: time="2025-07-12T00:19:02.816743992Z" level=info msg="shim disconnected" id=602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99 namespace=k8s.io
Jul 12 00:19:02.816806 containerd[1438]: time="2025-07-12T00:19:02.816799233Z" level=warning msg="cleaning up after shim disconnected" id=602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99 namespace=k8s.io
Jul 12 00:19:02.816806 containerd[1438]: time="2025-07-12T00:19:02.816808593Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:19:02.829147 containerd[1438]: time="2025-07-12T00:19:02.828664425Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:19:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 12 00:19:03.251274 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-602d3c435b8cc4552aaa33eb9055d9c815061727ae816769895f06c289ed9c99-rootfs.mount: Deactivated successfully.
Jul 12 00:19:03.733289 kubelet[2460]: E0712 00:19:03.731734 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:03.735276 containerd[1438]: time="2025-07-12T00:19:03.735016845Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:19:03.751242 containerd[1438]: time="2025-07-12T00:19:03.751105113Z" level=info msg="CreateContainer within sandbox \"242110edd3e0317ab681bc71930bcb5f40c9175883ca60195f56bf82aedd282e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"15ee99382e35aed6ff5fd4ef38108829e0b0724d2fd30ee368286874439d0598\""
Jul 12 00:19:03.752591 containerd[1438]: time="2025-07-12T00:19:03.751627678Z" level=info msg="StartContainer for \"15ee99382e35aed6ff5fd4ef38108829e0b0724d2fd30ee368286874439d0598\""
Jul 12 00:19:03.777694 systemd[1]: Started cri-containerd-15ee99382e35aed6ff5fd4ef38108829e0b0724d2fd30ee368286874439d0598.scope - libcontainer container 15ee99382e35aed6ff5fd4ef38108829e0b0724d2fd30ee368286874439d0598.
Jul 12 00:19:03.805105 containerd[1438]: time="2025-07-12T00:19:03.805045808Z" level=info msg="StartContainer for \"15ee99382e35aed6ff5fd4ef38108829e0b0724d2fd30ee368286874439d0598\" returns successfully"
Jul 12 00:19:04.079551 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 12 00:19:04.516447 kubelet[2460]: E0712 00:19:04.516281 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:04.740171 kubelet[2460]: E0712 00:19:04.738948 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:04.758658 kubelet[2460]: I0712 00:19:04.758059 2460 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n6pf4" podStartSLOduration=5.758041588 podStartE2EDuration="5.758041588s" podCreationTimestamp="2025-07-12 00:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:19:04.757885987 +0000 UTC m=+85.333555434" watchObservedRunningTime="2025-07-12 00:19:04.758041588 +0000 UTC m=+85.333711035"
Jul 12 00:19:06.289761 kubelet[2460]: E0712 00:19:06.288474 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:06.935941 systemd-networkd[1385]: lxc_health: Link UP
Jul 12 00:19:06.939870 systemd-networkd[1385]: lxc_health: Gained carrier
Jul 12 00:19:07.517239 kubelet[2460]: E0712 00:19:07.516878 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:08.094659 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Jul 12 00:19:08.289717 kubelet[2460]: E0712 00:19:08.289671 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:08.743833 kubelet[2460]: E0712 00:19:08.743735 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:09.745603 kubelet[2460]: E0712 00:19:09.745575 2460 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:19:12.943948 sshd[4301]: pam_unix(sshd:session): session closed for user core
Jul 12 00:19:12.948032 systemd[1]: sshd@25-10.0.0.83:22-10.0.0.1:42830.service: Deactivated successfully.
Jul 12 00:19:12.949959 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:19:12.950595 systemd-logind[1423]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:19:12.951488 systemd-logind[1423]: Removed session 26.