Jul 7 05:45:10.923169 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 7 05:45:10.923189 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Sun Jul 6 22:28:26 -00 2025
Jul 7 05:45:10.923199 kernel: KASLR enabled
Jul 7 05:45:10.923204 kernel: efi: EFI v2.7 by EDK II
Jul 7 05:45:10.923210 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 7 05:45:10.923216 kernel: random: crng init done
Jul 7 05:45:10.923223 kernel: ACPI: Early table checksum verification disabled
Jul 7 05:45:10.923229 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 7 05:45:10.923235 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 05:45:10.923242 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923248 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923254 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923260 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923266 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923273 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923281 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923288 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923294 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 05:45:10.923300 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 7 05:45:10.923306 kernel: NUMA: Failed to initialise from firmware
Jul 7 05:45:10.923313 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 05:45:10.923319 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 7 05:45:10.923325 kernel: Zone ranges:
Jul 7 05:45:10.923331 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 05:45:10.923338 kernel: DMA32 empty
Jul 7 05:45:10.923345 kernel: Normal empty
Jul 7 05:45:10.923351 kernel: Movable zone start for each node
Jul 7 05:45:10.923357 kernel: Early memory node ranges
Jul 7 05:45:10.923364 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 7 05:45:10.923370 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 7 05:45:10.923376 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 7 05:45:10.923383 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 7 05:45:10.923389 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 7 05:45:10.923395 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 7 05:45:10.923401 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 7 05:45:10.923408 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 7 05:45:10.923414 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 7 05:45:10.923422 kernel: psci: probing for conduit method from ACPI.
Jul 7 05:45:10.923428 kernel: psci: PSCIv1.1 detected in firmware.
Jul 7 05:45:10.923435 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 7 05:45:10.923444 kernel: psci: Trusted OS migration not required
Jul 7 05:45:10.923451 kernel: psci: SMC Calling Convention v1.1
Jul 7 05:45:10.923457 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 7 05:45:10.923466 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 7 05:45:10.923472 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 7 05:45:10.923479 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 7 05:45:10.923486 kernel: Detected PIPT I-cache on CPU0
Jul 7 05:45:10.923493 kernel: CPU features: detected: GIC system register CPU interface
Jul 7 05:45:10.923500 kernel: CPU features: detected: Hardware dirty bit management
Jul 7 05:45:10.923506 kernel: CPU features: detected: Spectre-v4
Jul 7 05:45:10.923513 kernel: CPU features: detected: Spectre-BHB
Jul 7 05:45:10.923520 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 7 05:45:10.923526 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 7 05:45:10.923535 kernel: CPU features: detected: ARM erratum 1418040
Jul 7 05:45:10.923541 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 7 05:45:10.923548 kernel: alternatives: applying boot alternatives
Jul 7 05:45:10.923556 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:45:10.923563 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 05:45:10.923570 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 05:45:10.923576 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 05:45:10.923583 kernel: Fallback order for Node 0: 0
Jul 7 05:45:10.923590 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 7 05:45:10.923597 kernel: Policy zone: DMA
Jul 7 05:45:10.923603 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 05:45:10.923611 kernel: software IO TLB: area num 4.
Jul 7 05:45:10.923623 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 7 05:45:10.923630 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 7 05:45:10.923636 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 05:45:10.923643 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 05:45:10.923650 kernel: rcu: RCU event tracing is enabled.
Jul 7 05:45:10.923657 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 05:45:10.923664 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 05:45:10.923671 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 05:45:10.923678 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 05:45:10.923684 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 05:45:10.923716 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 7 05:45:10.923726 kernel: GICv3: 256 SPIs implemented
Jul 7 05:45:10.923733 kernel: GICv3: 0 Extended SPIs implemented
Jul 7 05:45:10.923740 kernel: Root IRQ handler: gic_handle_irq
Jul 7 05:45:10.923746 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 7 05:45:10.923753 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 7 05:45:10.923760 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 7 05:45:10.923771 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 7 05:45:10.923778 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 7 05:45:10.923785 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 7 05:45:10.923792 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 7 05:45:10.923799 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 05:45:10.923808 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:45:10.923815 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 7 05:45:10.923822 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 7 05:45:10.923829 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 7 05:45:10.923836 kernel: arm-pv: using stolen time PV
Jul 7 05:45:10.923843 kernel: Console: colour dummy device 80x25
Jul 7 05:45:10.923850 kernel: ACPI: Core revision 20230628
Jul 7 05:45:10.923857 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 7 05:45:10.923864 kernel: pid_max: default: 32768 minimum: 301
Jul 7 05:45:10.923871 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 05:45:10.923879 kernel: landlock: Up and running.
Jul 7 05:45:10.923886 kernel: SELinux: Initializing.
Jul 7 05:45:10.923893 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:45:10.923900 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 05:45:10.923907 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 05:45:10.923914 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 05:45:10.923921 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 05:45:10.923928 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 05:45:10.923935 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 7 05:45:10.923943 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 7 05:45:10.923950 kernel: Remapping and enabling EFI services.
Jul 7 05:45:10.923957 kernel: smp: Bringing up secondary CPUs ...
Jul 7 05:45:10.923963 kernel: Detected PIPT I-cache on CPU1
Jul 7 05:45:10.923970 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 7 05:45:10.923978 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 7 05:45:10.923985 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:45:10.923991 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 7 05:45:10.923998 kernel: Detected PIPT I-cache on CPU2
Jul 7 05:45:10.924005 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 7 05:45:10.924014 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 7 05:45:10.924021 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:45:10.924032 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 7 05:45:10.924041 kernel: Detected PIPT I-cache on CPU3
Jul 7 05:45:10.924048 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 7 05:45:10.924056 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 7 05:45:10.924063 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 7 05:45:10.924070 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 7 05:45:10.924077 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 05:45:10.924086 kernel: SMP: Total of 4 processors activated.
Jul 7 05:45:10.924093 kernel: CPU features: detected: 32-bit EL0 Support
Jul 7 05:45:10.924100 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 7 05:45:10.924107 kernel: CPU features: detected: Common not Private translations
Jul 7 05:45:10.924115 kernel: CPU features: detected: CRC32 instructions
Jul 7 05:45:10.924122 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 7 05:45:10.924129 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 7 05:45:10.924137 kernel: CPU features: detected: LSE atomic instructions
Jul 7 05:45:10.924146 kernel: CPU features: detected: Privileged Access Never
Jul 7 05:45:10.924153 kernel: CPU features: detected: RAS Extension Support
Jul 7 05:45:10.924160 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 7 05:45:10.924167 kernel: CPU: All CPU(s) started at EL1
Jul 7 05:45:10.924175 kernel: alternatives: applying system-wide alternatives
Jul 7 05:45:10.924182 kernel: devtmpfs: initialized
Jul 7 05:45:10.924189 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 05:45:10.924196 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 05:45:10.924204 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 05:45:10.924212 kernel: SMBIOS 3.0.0 present.
Jul 7 05:45:10.924219 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 7 05:45:10.924227 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 05:45:10.924234 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 7 05:45:10.924241 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 7 05:45:10.924249 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 7 05:45:10.924256 kernel: audit: initializing netlink subsys (disabled)
Jul 7 05:45:10.924263 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Jul 7 05:45:10.924271 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 05:45:10.924279 kernel: cpuidle: using governor menu
Jul 7 05:45:10.924287 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 7 05:45:10.924294 kernel: ASID allocator initialised with 32768 entries
Jul 7 05:45:10.924302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 05:45:10.924309 kernel: Serial: AMBA PL011 UART driver
Jul 7 05:45:10.924317 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 7 05:45:10.924324 kernel: Modules: 0 pages in range for non-PLT usage
Jul 7 05:45:10.924332 kernel: Modules: 509008 pages in range for PLT usage
Jul 7 05:45:10.924339 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 05:45:10.924348 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 05:45:10.924356 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 7 05:45:10.924363 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 7 05:45:10.924370 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 05:45:10.924377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 05:45:10.924385 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 7 05:45:10.924392 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 7 05:45:10.924399 kernel: ACPI: Added _OSI(Module Device)
Jul 7 05:45:10.924406 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 05:45:10.924415 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 05:45:10.924423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 05:45:10.924430 kernel: ACPI: Interpreter enabled
Jul 7 05:45:10.924437 kernel: ACPI: Using GIC for interrupt routing
Jul 7 05:45:10.924444 kernel: ACPI: MCFG table detected, 1 entries
Jul 7 05:45:10.924451 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 7 05:45:10.924459 kernel: printk: console [ttyAMA0] enabled
Jul 7 05:45:10.924466 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 05:45:10.924605 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 05:45:10.924787 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 7 05:45:10.924861 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 7 05:45:10.924925 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 7 05:45:10.924990 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 7 05:45:10.925000 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 7 05:45:10.925007 kernel: PCI host bridge to bus 0000:00
Jul 7 05:45:10.925077 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 7 05:45:10.925139 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 7 05:45:10.925197 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 7 05:45:10.925254 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 05:45:10.925333 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 7 05:45:10.925408 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 05:45:10.925475 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 7 05:45:10.925549 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 7 05:45:10.925617 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 05:45:10.925707 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 7 05:45:10.925778 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 7 05:45:10.925862 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 7 05:45:10.925925 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 7 05:45:10.925985 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 7 05:45:10.926046 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 7 05:45:10.926056 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 7 05:45:10.926064 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 7 05:45:10.926071 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 7 05:45:10.926079 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 7 05:45:10.926086 kernel: iommu: Default domain type: Translated
Jul 7 05:45:10.926093 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 7 05:45:10.926101 kernel: efivars: Registered efivars operations
Jul 7 05:45:10.926108 kernel: vgaarb: loaded
Jul 7 05:45:10.926118 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 7 05:45:10.926125 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 05:45:10.926132 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 05:45:10.926140 kernel: pnp: PnP ACPI init
Jul 7 05:45:10.926228 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 7 05:45:10.926239 kernel: pnp: PnP ACPI: found 1 devices
Jul 7 05:45:10.926246 kernel: NET: Registered PF_INET protocol family
Jul 7 05:45:10.926253 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 05:45:10.926263 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 05:45:10.926271 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 05:45:10.926278 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 05:45:10.926285 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 05:45:10.926293 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 05:45:10.926300 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:45:10.926307 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 05:45:10.926315 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 05:45:10.926322 kernel: PCI: CLS 0 bytes, default 64
Jul 7 05:45:10.926331 kernel: kvm [1]: HYP mode not available
Jul 7 05:45:10.926339 kernel: Initialise system trusted keyrings
Jul 7 05:45:10.926346 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 05:45:10.926353 kernel: Key type asymmetric registered
Jul 7 05:45:10.926360 kernel: Asymmetric key parser 'x509' registered
Jul 7 05:45:10.926368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 05:45:10.926376 kernel: io scheduler mq-deadline registered
Jul 7 05:45:10.926383 kernel: io scheduler kyber registered
Jul 7 05:45:10.926390 kernel: io scheduler bfq registered
Jul 7 05:45:10.926399 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 7 05:45:10.926407 kernel: ACPI: button: Power Button [PWRB]
Jul 7 05:45:10.926415 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 7 05:45:10.926481 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 7 05:45:10.926491 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 05:45:10.926498 kernel: thunder_xcv, ver 1.0
Jul 7 05:45:10.926505 kernel: thunder_bgx, ver 1.0
Jul 7 05:45:10.926512 kernel: nicpf, ver 1.0
Jul 7 05:45:10.926520 kernel: nicvf, ver 1.0
Jul 7 05:45:10.926594 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 7 05:45:10.926657 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-07T05:45:10 UTC (1751867110)
Jul 7 05:45:10.926667 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 7 05:45:10.926675 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 7 05:45:10.926683 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 7 05:45:10.926753 kernel: watchdog: Hard watchdog permanently disabled
Jul 7 05:45:10.926763 kernel: NET: Registered PF_INET6 protocol family
Jul 7 05:45:10.926770 kernel: Segment Routing with IPv6
Jul 7 05:45:10.926781 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 05:45:10.926788 kernel: NET: Registered PF_PACKET protocol family
Jul 7 05:45:10.926798 kernel: Key type dns_resolver registered
Jul 7 05:45:10.926806 kernel: registered taskstats version 1
Jul 7 05:45:10.926814 kernel: Loading compiled-in X.509 certificates
Jul 7 05:45:10.926821 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 238b9dc1e5bb098e9decff566778e6505241ab94'
Jul 7 05:45:10.926829 kernel: Key type .fscrypt registered
Jul 7 05:45:10.926836 kernel: Key type fscrypt-provisioning registered
Jul 7 05:45:10.926844 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 05:45:10.926852 kernel: ima: Allocated hash algorithm: sha1
Jul 7 05:45:10.926860 kernel: ima: No architecture policies found
Jul 7 05:45:10.926868 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 7 05:45:10.926876 kernel: clk: Disabling unused clocks
Jul 7 05:45:10.926883 kernel: Freeing unused kernel memory: 39424K
Jul 7 05:45:10.926890 kernel: Run /init as init process
Jul 7 05:45:10.926897 kernel: with arguments:
Jul 7 05:45:10.926905 kernel: /init
Jul 7 05:45:10.926912 kernel: with environment:
Jul 7 05:45:10.926920 kernel: HOME=/
Jul 7 05:45:10.926927 kernel: TERM=linux
Jul 7 05:45:10.926934 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 05:45:10.926944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 05:45:10.926953 systemd[1]: Detected virtualization kvm.
Jul 7 05:45:10.926962 systemd[1]: Detected architecture arm64.
Jul 7 05:45:10.926969 systemd[1]: Running in initrd.
Jul 7 05:45:10.926977 systemd[1]: No hostname configured, using default hostname.
Jul 7 05:45:10.926987 systemd[1]: Hostname set to .
Jul 7 05:45:10.926995 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 05:45:10.927004 systemd[1]: Queued start job for default target initrd.target.
Jul 7 05:45:10.927012 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:45:10.927020 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:45:10.927028 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 05:45:10.927036 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 05:45:10.927045 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 05:45:10.927054 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 05:45:10.927064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 05:45:10.927072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 05:45:10.927081 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:45:10.927089 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:45:10.927097 systemd[1]: Reached target paths.target - Path Units.
Jul 7 05:45:10.927107 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 05:45:10.927115 systemd[1]: Reached target swap.target - Swaps.
Jul 7 05:45:10.927123 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 05:45:10.927132 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:45:10.927140 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:45:10.927148 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 05:45:10.927156 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 05:45:10.927165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:45:10.927173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 05:45:10.927183 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 05:45:10.927191 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 05:45:10.927199 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 05:45:10.927207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 05:45:10.927215 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 05:45:10.927228 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 05:45:10.927237 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 05:45:10.927245 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 05:45:10.927253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:45:10.927263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 05:45:10.927271 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:45:10.927279 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 05:45:10.927288 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 05:45:10.927298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:45:10.927325 systemd-journald[238]: Collecting audit messages is disabled.
Jul 7 05:45:10.927348 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:45:10.927358 systemd-journald[238]: Journal started
Jul 7 05:45:10.927378 systemd-journald[238]: Runtime Journal (/run/log/journal/23bf603ffb654f2fb25b925b2b53ef85) is 5.9M, max 47.3M, 41.4M free.
Jul 7 05:45:10.914015 systemd-modules-load[239]: Inserted module 'overlay'
Jul 7 05:45:10.930471 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 05:45:10.930498 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 05:45:10.932715 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 05:45:10.932746 kernel: Bridge firewalling registered
Jul 7 05:45:10.933126 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 7 05:45:10.934178 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 05:45:10.936778 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 05:45:10.938867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 05:45:10.942125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 05:45:10.947994 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:45:10.951940 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 05:45:10.953632 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 05:45:10.955264 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:45:10.956854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 05:45:10.963066 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 05:45:10.968041 dracut-cmdline[270]: dracut-dracut-053
Jul 7 05:45:10.971665 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d8ee5af37c0fd8dad02b585c18ea1a7b66b80110546cbe726b93dd7a9fbe678b
Jul 7 05:45:11.006059 systemd-resolved[283]: Positive Trust Anchors:
Jul 7 05:45:11.006077 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 05:45:11.006111 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 05:45:11.012152 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jul 7 05:45:11.013419 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 05:45:11.015512 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:45:11.052725 kernel: SCSI subsystem initialized
Jul 7 05:45:11.056717 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 05:45:11.063729 kernel: iscsi: registered transport (tcp)
Jul 7 05:45:11.078734 kernel: iscsi: registered transport (qla4xxx)
Jul 7 05:45:11.078785 kernel: QLogic iSCSI HBA Driver
Jul 7 05:45:11.125034 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 05:45:11.140881 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 05:45:11.159473 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 05:45:11.159533 kernel: device-mapper: uevent: version 1.0.3
Jul 7 05:45:11.159544 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 05:45:11.207727 kernel: raid6: neonx8 gen() 15628 MB/s
Jul 7 05:45:11.224716 kernel: raid6: neonx4 gen() 15546 MB/s
Jul 7 05:45:11.241721 kernel: raid6: neonx2 gen() 13152 MB/s
Jul 7 05:45:11.258714 kernel: raid6: neonx1 gen() 10428 MB/s
Jul 7 05:45:11.275711 kernel: raid6: int64x8 gen() 6906 MB/s
Jul 7 05:45:11.292713 kernel: raid6: int64x4 gen() 7299 MB/s
Jul 7 05:45:11.309712 kernel: raid6: int64x2 gen() 6112 MB/s
Jul 7 05:45:11.326713 kernel: raid6: int64x1 gen() 5046 MB/s
Jul 7 05:45:11.326726 kernel: raid6: using algorithm neonx8 gen() 15628 MB/s
Jul 7 05:45:11.343718 kernel: raid6: .... xor() 11868 MB/s, rmw enabled
Jul 7 05:45:11.343736 kernel: raid6: using neon recovery algorithm
Jul 7 05:45:11.348889 kernel: xor: measuring software checksum speed
Jul 7 05:45:11.348903 kernel: 8regs : 19759 MB/sec
Jul 7 05:45:11.349920 kernel: 32regs : 19669 MB/sec
Jul 7 05:45:11.349932 kernel: arm64_neon : 25552 MB/sec
Jul 7 05:45:11.349941 kernel: xor: using function: arm64_neon (25552 MB/sec)
Jul 7 05:45:11.400718 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 05:45:11.412259 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 05:45:11.426873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:45:11.438567 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 7 05:45:11.441742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 05:45:11.461882 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 05:45:11.474176 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 7 05:45:11.500426 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:45:11.515866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 05:45:11.554756 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:45:11.561904 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 05:45:11.574805 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:45:11.576076 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:45:11.577183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:45:11.578660 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 05:45:11.586860 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 05:45:11.597652 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:45:11.602722 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 7 05:45:11.602912 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 05:45:11.605694 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 05:45:11.605822 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:45:11.612237 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 05:45:11.612254 kernel: GPT:9289727 != 19775487
Jul 7 05:45:11.612269 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 05:45:11.612278 kernel: GPT:9289727 != 19775487
Jul 7 05:45:11.612287 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 05:45:11.612296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 05:45:11.612289 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:45:11.613193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 05:45:11.613319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:45:11.615643 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:45:11.626718 kernel: BTRFS: device fsid 8b9ce65a-b4d6-4744-987c-133e7f159d2d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (506)
Jul 7 05:45:11.626756 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (515)
Jul 7 05:45:11.629940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 05:45:11.640298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 05:45:11.645064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 05:45:11.652279 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 05:45:11.656677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 05:45:11.657536 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 05:45:11.663087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 05:45:11.677878 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 05:45:11.679300 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 05:45:11.683769 disk-uuid[549]: Primary Header is updated.
Jul 7 05:45:11.683769 disk-uuid[549]: Secondary Entries is updated.
Jul 7 05:45:11.683769 disk-uuid[549]: Secondary Header is updated.
Jul 7 05:45:11.687849 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 05:45:11.698122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 05:45:11.700583 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 05:45:12.699729 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 05:45:12.700724 disk-uuid[550]: The operation has completed successfully.
Jul 7 05:45:12.721847 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 05:45:12.721963 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 05:45:12.741877 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 05:45:12.744650 sh[573]: Success
Jul 7 05:45:12.758735 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 7 05:45:12.785903 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 05:45:12.795982 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 05:45:12.798045 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 05:45:12.806157 kernel: BTRFS info (device dm-0): first mount of filesystem 8b9ce65a-b4d6-4744-987c-133e7f159d2d
Jul 7 05:45:12.806197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:45:12.809010 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 05:45:12.809025 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 05:45:12.809036 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 05:45:12.813482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 05:45:12.814294 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 05:45:12.823879 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 05:45:12.825108 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 05:45:12.833038 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:45:12.833073 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:45:12.833714 kernel: BTRFS info (device vda6): using free space tree
Jul 7 05:45:12.835723 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 05:45:12.842901 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 05:45:12.844047 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:45:12.849755 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 05:45:12.855873 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 05:45:12.924411 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:45:12.933892 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 05:45:12.959189 systemd-networkd[762]: lo: Link UP
Jul 7 05:45:12.959202 systemd-networkd[762]: lo: Gained carrier
Jul 7 05:45:12.959878 systemd-networkd[762]: Enumeration completed
Jul 7 05:45:12.960156 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 05:45:12.960365 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:45:12.962148 ignition[667]: Ignition 2.19.0
Jul 7 05:45:12.960368 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 05:45:12.962154 ignition[667]: Stage: fetch-offline
Jul 7 05:45:12.961012 systemd-networkd[762]: eth0: Link UP
Jul 7 05:45:12.962185 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:12.961015 systemd-networkd[762]: eth0: Gained carrier
Jul 7 05:45:12.962193 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:12.961021 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 05:45:12.962343 ignition[667]: parsed url from cmdline: ""
Jul 7 05:45:12.961429 systemd[1]: Reached target network.target - Network.
Jul 7 05:45:12.962346 ignition[667]: no config URL provided
Jul 7 05:45:12.962350 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 05:45:12.962357 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Jul 7 05:45:12.962378 ignition[667]: op(1): [started] loading QEMU firmware config module
Jul 7 05:45:12.962383 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 05:45:12.971653 ignition[667]: op(1): [finished] loading QEMU firmware config module
Jul 7 05:45:12.983757 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 05:45:13.011009 ignition[667]: parsing config with SHA512: 0e64554fabab354111d94e880bfc52b53507f46a4cae5a8fd538007691ef4f704c6d6865a44cb040936ffd5cef34144bb6b01044745b35b9e390b47a4d0fd139
Jul 7 05:45:13.016660 unknown[667]: fetched base config from "system"
Jul 7 05:45:13.016672 unknown[667]: fetched user config from "qemu"
Jul 7 05:45:13.017124 ignition[667]: fetch-offline: fetch-offline passed
Jul 7 05:45:13.017185 ignition[667]: Ignition finished successfully
Jul 7 05:45:13.019092 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:45:13.020082 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 05:45:13.028853 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 05:45:13.038951 ignition[768]: Ignition 2.19.0
Jul 7 05:45:13.038961 ignition[768]: Stage: kargs
Jul 7 05:45:13.039119 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:13.039127 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:13.039979 ignition[768]: kargs: kargs passed
Jul 7 05:45:13.040019 ignition[768]: Ignition finished successfully
Jul 7 05:45:13.041859 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 05:45:13.048890 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 05:45:13.057513 ignition[776]: Ignition 2.19.0
Jul 7 05:45:13.057523 ignition[776]: Stage: disks
Jul 7 05:45:13.057679 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:13.057715 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:13.058525 ignition[776]: disks: disks passed
Jul 7 05:45:13.061072 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 05:45:13.058566 ignition[776]: Ignition finished successfully
Jul 7 05:45:13.061993 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 05:45:13.063254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 05:45:13.064514 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 05:45:13.065942 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 05:45:13.067346 systemd[1]: Reached target basic.target - Basic System.
Jul 7 05:45:13.079874 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 05:45:13.090146 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 05:45:13.093678 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 05:45:13.102863 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 05:45:13.142720 kernel: EXT4-fs (vda9): mounted filesystem bea371b7-1069-4e98-84b2-bf5b94f934f3 r/w with ordered data mode. Quota mode: none.
Jul 7 05:45:13.143303 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 05:45:13.144307 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 05:45:13.152784 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:45:13.154218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 05:45:13.155250 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 05:45:13.155286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 05:45:13.160386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (794)
Jul 7 05:45:13.155306 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:45:13.163265 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:45:13.163284 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:45:13.163293 kernel: BTRFS info (device vda6): using free space tree
Jul 7 05:45:13.161569 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 05:45:13.164472 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 05:45:13.167725 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 05:45:13.168926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:45:13.204625 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 05:45:13.207750 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Jul 7 05:45:13.211120 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 05:45:13.214745 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 05:45:13.282173 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 05:45:13.293815 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 05:45:13.295138 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 05:45:13.299733 kernel: BTRFS info (device vda6): last unmount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:45:13.315237 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 05:45:13.317235 ignition[906]: INFO : Ignition 2.19.0
Jul 7 05:45:13.317235 ignition[906]: INFO : Stage: mount
Jul 7 05:45:13.318377 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:13.318377 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:13.318377 ignition[906]: INFO : mount: mount passed
Jul 7 05:45:13.318377 ignition[906]: INFO : Ignition finished successfully
Jul 7 05:45:13.319645 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 05:45:13.328841 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 05:45:13.805819 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 05:45:13.821889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 05:45:13.828126 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (921)
Jul 7 05:45:13.828156 kernel: BTRFS info (device vda6): first mount of filesystem 1c5c26db-4e47-4c5b-afcc-cdf6cfde8d6e
Jul 7 05:45:13.828167 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 7 05:45:13.829712 kernel: BTRFS info (device vda6): using free space tree
Jul 7 05:45:13.831719 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 05:45:13.832393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 05:45:13.849388 ignition[938]: INFO : Ignition 2.19.0
Jul 7 05:45:13.849388 ignition[938]: INFO : Stage: files
Jul 7 05:45:13.850666 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:13.850666 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:13.850666 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Jul 7 05:45:13.853259 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 7 05:45:13.853259 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 7 05:45:13.855222 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 7 05:45:13.855222 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 7 05:45:13.855222 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 7 05:45:13.855222 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:45:13.855222 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 7 05:45:13.853811 unknown[938]: wrote ssh authorized keys file for user: core
Jul 7 05:45:13.906586 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 7 05:45:14.060916 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 7 05:45:14.060916 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:45:14.063818 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 7 05:45:14.445436 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 7 05:45:14.582482 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:45:14.584161 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 7 05:45:14.676815 systemd-networkd[762]: eth0: Gained IPv6LL
Jul 7 05:45:15.125257 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 7 05:45:15.765892 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 7 05:45:15.765892 ignition[938]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 7 05:45:15.771434 ignition[938]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 7 05:45:15.772779 ignition[938]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 7 05:45:15.801915 ignition[938]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 05:45:15.806478 ignition[938]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 7 05:45:15.807739 ignition[938]: INFO : files: files passed
Jul 7 05:45:15.807739 ignition[938]: INFO : Ignition finished successfully
Jul 7 05:45:15.809369 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 7 05:45:15.818868 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 7 05:45:15.824481 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 7 05:45:15.830657 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 7 05:45:15.830787 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 7 05:45:15.834064 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 7 05:45:15.837553 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:45:15.837553 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:45:15.839827 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 7 05:45:15.842937 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:45:15.844083 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 7 05:45:15.855020 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 7 05:45:15.876467 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 7 05:45:15.876582 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 7 05:45:15.878299 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 7 05:45:15.879520 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 7 05:45:15.881002 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 7 05:45:15.881869 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 7 05:45:15.900821 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:45:15.910898 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 7 05:45:15.918898 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 7 05:45:15.919846 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 05:45:15.921343 systemd[1]: Stopped target timers.target - Timer Units.
Jul 7 05:45:15.922615 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 7 05:45:15.922763 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 7 05:45:15.924525 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 7 05:45:15.925977 systemd[1]: Stopped target basic.target - Basic System.
Jul 7 05:45:15.927171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 7 05:45:15.928405 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 05:45:15.929778 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 7 05:45:15.931228 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 7 05:45:15.932543 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 05:45:15.933996 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 7 05:45:15.935464 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 7 05:45:15.936837 systemd[1]: Stopped target swap.target - Swaps.
Jul 7 05:45:15.938056 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 7 05:45:15.938188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 05:45:15.939944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 7 05:45:15.941395 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 05:45:15.942819 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 7 05:45:15.944414 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 05:45:15.945403 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 7 05:45:15.945523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 7 05:45:15.947647 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 7 05:45:15.947785 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 05:45:15.949244 systemd[1]: Stopped target paths.target - Path Units.
Jul 7 05:45:15.950326 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 7 05:45:15.953766 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 05:45:15.954769 systemd[1]: Stopped target slices.target - Slice Units.
Jul 7 05:45:15.956365 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 7 05:45:15.957504 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 7 05:45:15.957595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 05:45:15.958734 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 7 05:45:15.958818 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 05:45:15.959939 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 7 05:45:15.960050 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 7 05:45:15.961344 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 7 05:45:15.961442 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 7 05:45:15.973987 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 7 05:45:15.974662 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 7 05:45:15.974803 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 05:45:15.977056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 7 05:45:15.978353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 7 05:45:15.978492 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 05:45:15.979883 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 7 05:45:15.979975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 05:45:15.986418 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 7 05:45:15.986764 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 7 05:45:15.992928 ignition[994]: INFO : Ignition 2.19.0
Jul 7 05:45:15.992928 ignition[994]: INFO : Stage: umount
Jul 7 05:45:15.994405 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 05:45:15.994405 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 05:45:15.996213 ignition[994]: INFO : umount: umount passed
Jul 7 05:45:15.996213 ignition[994]: INFO : Ignition finished successfully
Jul 7 05:45:15.998082 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 7 05:45:15.998585 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 7 05:45:15.999771 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 7 05:45:16.001359 systemd[1]: Stopped target network.target - Network.
Jul 7 05:45:16.002111 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 7 05:45:16.002179 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 7 05:45:16.003395 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 7 05:45:16.003432 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 7 05:45:16.004722 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 7 05:45:16.004767 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 7 05:45:16.006032 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 7 05:45:16.006073 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 7 05:45:16.007518 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 7 05:45:16.008846 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 7 05:45:16.010813 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 7 05:45:16.010904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 7 05:45:16.013209 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 7 05:45:16.013302 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 7 05:45:16.014775 systemd-networkd[762]: eth0: DHCPv6 lease lost
Jul 7 05:45:16.016863 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 7 05:45:16.016976 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 7 05:45:16.018801 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 7 05:45:16.018836 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 05:45:16.028344 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 7 05:45:16.029308 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 7 05:45:16.029384 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 05:45:16.032330 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 05:45:16.035527 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 7 05:45:16.035760 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 7 05:45:16.040518 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 7 05:45:16.040580 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 7 05:45:16.042997 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 05:45:16.043077 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 05:45:16.046572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 05:45:16.046684 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:45:16.049102 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 05:45:16.049283 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:45:16.051074 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 05:45:16.051186 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 05:45:16.053182 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 05:45:16.053250 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 05:45:16.055090 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 05:45:16.055127 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:45:16.056579 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 05:45:16.056637 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 05:45:16.058720 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 05:45:16.058770 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 05:45:16.060948 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 05:45:16.060996 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 05:45:16.074263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 05:45:16.075446 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 05:45:16.075518 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 05:45:16.077520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 05:45:16.077572 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:45:16.080453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 05:45:16.080535 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 05:45:16.083592 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 05:45:16.089083 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 05:45:16.100172 systemd[1]: Switching root. Jul 7 05:45:16.130714 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 7 05:45:16.130772 systemd-journald[238]: Journal stopped Jul 7 05:45:16.905535 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 05:45:16.905590 kernel: SELinux: policy capability open_perms=1 Jul 7 05:45:16.905602 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 05:45:16.905611 kernel: SELinux: policy capability always_check_network=0 Jul 7 05:45:16.905621 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 05:45:16.905630 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 05:45:16.905640 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 05:45:16.905653 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 05:45:16.905665 kernel: audit: type=1403 audit(1751867116.328:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 05:45:16.905694 systemd[1]: Successfully loaded SELinux policy in 31.923ms. Jul 7 05:45:16.905729 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.561ms. 
Jul 7 05:45:16.905741 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 05:45:16.905752 systemd[1]: Detected virtualization kvm. Jul 7 05:45:16.905763 systemd[1]: Detected architecture arm64. Jul 7 05:45:16.905774 systemd[1]: Detected first boot. Jul 7 05:45:16.905786 systemd[1]: Initializing machine ID from VM UUID. Jul 7 05:45:16.905797 zram_generator::config[1041]: No configuration found. Jul 7 05:45:16.905808 systemd[1]: Populated /etc with preset unit settings. Jul 7 05:45:16.905823 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 05:45:16.905833 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 05:45:16.905844 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 05:45:16.905855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 05:45:16.905866 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 05:45:16.905879 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 05:45:16.905890 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 05:45:16.905900 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 05:45:16.905911 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 05:45:16.905922 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 05:45:16.905932 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 05:45:16.905945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jul 7 05:45:16.905955 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 05:45:16.905966 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 05:45:16.905978 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 05:45:16.905989 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 05:45:16.906000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 05:45:16.906011 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 7 05:45:16.906021 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 05:45:16.906032 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 05:45:16.906042 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 05:45:16.906053 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 05:45:16.906066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 05:45:16.906079 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 05:45:16.906089 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 05:45:16.906100 systemd[1]: Reached target slices.target - Slice Units. Jul 7 05:45:16.906110 systemd[1]: Reached target swap.target - Swaps. Jul 7 05:45:16.906121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 05:45:16.906131 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 05:45:16.906142 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 05:45:16.906153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 7 05:45:16.906165 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 05:45:16.906177 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 05:45:16.906188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 05:45:16.906199 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 05:45:16.906209 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 05:45:16.906220 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 05:45:16.906231 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 05:45:16.906241 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 05:45:16.906252 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 05:45:16.906264 systemd[1]: Reached target machines.target - Containers. Jul 7 05:45:16.906275 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 05:45:16.906285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:45:16.906296 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 05:45:16.906307 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 05:45:16.906317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:45:16.906328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:45:16.906338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:45:16.906352 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 05:45:16.906364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 7 05:45:16.906376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 05:45:16.906386 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 05:45:16.906398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 05:45:16.906409 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 05:45:16.906419 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 05:45:16.906429 kernel: loop: module loaded Jul 7 05:45:16.906439 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 05:45:16.906451 kernel: fuse: init (API version 7.39) Jul 7 05:45:16.906461 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 05:45:16.906471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 05:45:16.906482 kernel: ACPI: bus type drm_connector registered Jul 7 05:45:16.906492 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 05:45:16.906502 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 05:45:16.906513 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 05:45:16.906523 systemd[1]: Stopped verity-setup.service. Jul 7 05:45:16.906534 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 05:45:16.906546 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 05:45:16.906557 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 05:45:16.906567 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 05:45:16.906578 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 05:45:16.906605 systemd-journald[1101]: Collecting audit messages is disabled. Jul 7 05:45:16.906634 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jul 7 05:45:16.906645 systemd-journald[1101]: Journal started Jul 7 05:45:16.906667 systemd-journald[1101]: Runtime Journal (/run/log/journal/23bf603ffb654f2fb25b925b2b53ef85) is 5.9M, max 47.3M, 41.4M free. Jul 7 05:45:16.908532 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 05:45:16.715925 systemd[1]: Queued start job for default target multi-user.target. Jul 7 05:45:16.731843 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 05:45:16.732215 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 05:45:16.910735 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 05:45:16.912453 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 05:45:16.912599 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 05:45:16.913887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:45:16.914023 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:45:16.915140 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 05:45:16.915284 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:45:16.916466 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:45:16.916604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:45:16.917904 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 05:45:16.918039 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 05:45:16.919264 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:45:16.919386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:45:16.920718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 05:45:16.922008 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jul 7 05:45:16.923345 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 05:45:16.927148 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 05:45:16.937290 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 05:45:16.945821 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 05:45:16.947721 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 05:45:16.948515 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 05:45:16.948543 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 05:45:16.950238 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 05:45:16.952156 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 05:45:16.953943 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 05:45:16.954805 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:45:16.956220 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 05:45:16.957913 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 05:45:16.958752 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:45:16.961868 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 05:45:16.962771 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:45:16.964920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 7 05:45:16.965824 systemd-journald[1101]: Time spent on flushing to /var/log/journal/23bf603ffb654f2fb25b925b2b53ef85 is 20.586ms for 856 entries. Jul 7 05:45:16.965824 systemd-journald[1101]: System Journal (/var/log/journal/23bf603ffb654f2fb25b925b2b53ef85) is 8.0M, max 195.6M, 187.6M free. Jul 7 05:45:17.007039 systemd-journald[1101]: Received client request to flush runtime journal. Jul 7 05:45:17.007082 kernel: loop0: detected capacity change from 0 to 203944 Jul 7 05:45:16.970992 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 05:45:16.974784 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 05:45:16.976996 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 05:45:16.979747 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 05:45:16.980817 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 05:45:16.981941 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 05:45:16.987014 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 05:45:16.989754 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 05:45:16.990803 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 05:45:16.993839 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 05:45:17.010930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:45:17.012275 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 05:45:17.017659 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 05:45:17.025060 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Jul 7 05:45:17.027371 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 05:45:17.029726 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 05:45:17.035895 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 05:45:17.037125 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 05:45:17.054189 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jul 7 05:45:17.054517 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Jul 7 05:45:17.055746 kernel: loop1: detected capacity change from 0 to 114432 Jul 7 05:45:17.058917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 05:45:17.089895 kernel: loop2: detected capacity change from 0 to 114328 Jul 7 05:45:17.121734 kernel: loop3: detected capacity change from 0 to 203944 Jul 7 05:45:17.126718 kernel: loop4: detected capacity change from 0 to 114432 Jul 7 05:45:17.130721 kernel: loop5: detected capacity change from 0 to 114328 Jul 7 05:45:17.133546 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 05:45:17.133973 (sd-merge)[1178]: Merged extensions into '/usr'. Jul 7 05:45:17.138115 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 05:45:17.138135 systemd[1]: Reloading... Jul 7 05:45:17.201740 zram_generator::config[1207]: No configuration found. Jul 7 05:45:17.236641 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 05:45:17.286319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:45:17.322018 systemd[1]: Reloading finished in 183 ms. 
Jul 7 05:45:17.352073 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 05:45:17.353577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 05:45:17.365043 systemd[1]: Starting ensure-sysext.service... Jul 7 05:45:17.366925 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 05:45:17.377537 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jul 7 05:45:17.377556 systemd[1]: Reloading... Jul 7 05:45:17.391719 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 05:45:17.391979 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 05:45:17.392599 systemd-tmpfiles[1239]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 05:45:17.392838 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 7 05:45:17.392895 systemd-tmpfiles[1239]: ACLs are not supported, ignoring. Jul 7 05:45:17.404662 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:45:17.404683 systemd-tmpfiles[1239]: Skipping /boot Jul 7 05:45:17.412226 systemd-tmpfiles[1239]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 05:45:17.412242 systemd-tmpfiles[1239]: Skipping /boot Jul 7 05:45:17.434728 zram_generator::config[1262]: No configuration found. Jul 7 05:45:17.524000 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:45:17.559813 systemd[1]: Reloading finished in 181 ms. Jul 7 05:45:17.578735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jul 7 05:45:17.593182 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 05:45:17.601611 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 05:45:17.603989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 05:45:17.605992 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 05:45:17.610976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 05:45:17.618648 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 05:45:17.620963 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 05:45:17.624369 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:45:17.625972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:45:17.629427 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:45:17.633963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:45:17.634792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:45:17.635536 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 05:45:17.637084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:45:17.637206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:45:17.638545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:45:17.638680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:45:17.657826 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 7 05:45:17.659288 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:45:17.659429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:45:17.665319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:45:17.668055 systemd-udevd[1313]: Using default interface naming scheme 'v255'. Jul 7 05:45:17.672912 augenrules[1331]: No rules Jul 7 05:45:17.673005 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:45:17.675004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:45:17.676998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:45:17.677915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:45:17.679997 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 05:45:17.683213 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 05:45:17.684032 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 05:45:17.686771 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 05:45:17.688546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 05:45:17.690339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:45:17.690488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:45:17.691876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:45:17.692013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 7 05:45:17.693478 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:45:17.693616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:45:17.696400 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 05:45:17.700261 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 05:45:17.712512 systemd[1]: Finished ensure-sysext.service. Jul 7 05:45:17.717821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 05:45:17.722956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 05:45:17.728881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 05:45:17.732889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 05:45:17.736049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 05:45:17.739411 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 05:45:17.742442 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 05:45:17.749169 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 05:45:17.751309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 05:45:17.751657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 05:45:17.752887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 05:45:17.753053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 05:45:17.754213 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 7 05:45:17.754335 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 05:45:17.755550 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 05:45:17.755714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 05:45:17.761687 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 7 05:45:17.765406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 05:45:17.768080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 05:45:17.768252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 05:45:17.770486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 05:45:17.781724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1350) Jul 7 05:45:17.824336 systemd-networkd[1372]: lo: Link UP Jul 7 05:45:17.824345 systemd-networkd[1372]: lo: Gained carrier Jul 7 05:45:17.825152 systemd-networkd[1372]: Enumeration completed Jul 7 05:45:17.825273 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 05:45:17.829830 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 05:45:17.829843 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 05:45:17.831836 systemd-networkd[1372]: eth0: Link UP Jul 7 05:45:17.831846 systemd-networkd[1372]: eth0: Gained carrier Jul 7 05:45:17.831860 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 7 05:45:17.838898 systemd-resolved[1306]: Positive Trust Anchors: Jul 7 05:45:17.838936 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 05:45:17.839267 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 05:45:17.839357 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 05:45:17.840344 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 05:45:17.847414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 05:45:17.848758 systemd-networkd[1372]: eth0: DHCPv4 address 10.0.0.6/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 05:45:17.848772 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 05:45:17.850846 systemd-timesyncd[1374]: Network configuration changed, trying to establish connection. Jul 7 05:45:17.851628 systemd-timesyncd[1374]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 05:45:17.851706 systemd-timesyncd[1374]: Initial clock synchronization to Mon 2025-07-07 05:45:17.559378 UTC. Jul 7 05:45:17.856300 systemd-resolved[1306]: Defaulting to hostname 'linux'. Jul 7 05:45:17.860932 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 05:45:17.862042 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 7 05:45:17.863912 systemd[1]: Reached target network.target - Network. Jul 7 05:45:17.864564 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 05:45:17.875005 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 05:45:17.889996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 05:45:17.899724 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 05:45:17.902219 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 05:45:17.928915 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:45:17.950754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 05:45:17.960306 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 05:45:17.961514 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 05:45:17.963802 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 05:45:17.964678 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 05:45:17.965636 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 05:45:17.966852 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 05:45:17.967798 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 05:45:17.968929 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 05:45:17.969893 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 05:45:17.969927 systemd[1]: Reached target paths.target - Path Units. 
Jul 7 05:45:17.970562 systemd[1]: Reached target timers.target - Timer Units. Jul 7 05:45:17.972043 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 05:45:17.974238 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 05:45:17.982739 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 05:45:17.984844 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 05:45:17.986139 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 05:45:17.987133 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 05:45:17.987909 systemd[1]: Reached target basic.target - Basic System. Jul 7 05:45:17.988681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:45:17.988723 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 05:45:17.989768 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 05:45:17.991579 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 05:45:17.993809 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 05:45:17.995205 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 05:45:18.004062 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 05:45:18.005139 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 05:45:18.006953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 05:45:18.011305 jq[1411]: false Jul 7 05:45:18.010968 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 05:45:18.012954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jul 7 05:45:18.015722 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 05:45:18.018236 extend-filesystems[1412]: Found loop3 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found loop4 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found loop5 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda1 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda2 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda3 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found usr Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda4 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda6 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda7 Jul 7 05:45:18.020568 extend-filesystems[1412]: Found vda9 Jul 7 05:45:18.020568 extend-filesystems[1412]: Checking size of /dev/vda9 Jul 7 05:45:18.020913 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 05:45:18.028947 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 05:45:18.029421 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 05:45:18.030477 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 05:45:18.034760 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 05:45:18.037385 dbus-daemon[1410]: [system] SELinux support is enabled Jul 7 05:45:18.037732 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 05:45:18.038746 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 05:45:18.043395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 05:45:18.043546 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jul 7 05:45:18.044991 jq[1425]: true Jul 7 05:45:18.046124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 05:45:18.046298 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 05:45:18.048144 extend-filesystems[1412]: Resized partition /dev/vda9 Jul 7 05:45:18.053856 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 05:45:18.054093 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 05:45:18.056504 jq[1434]: true Jul 7 05:45:18.064381 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Jul 7 05:45:18.065194 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 05:45:18.065223 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 05:45:18.066375 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 05:45:18.066401 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 05:45:18.073927 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1359) Jul 7 05:45:18.073990 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 05:45:18.071808 (ntainerd)[1443]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 05:45:18.098408 update_engine[1423]: I20250707 05:45:18.098197 1423 main.cc:92] Flatcar Update Engine starting Jul 7 05:45:18.100664 systemd[1]: Started update-engine.service - Update Engine. 
Jul 7 05:45:18.101122 tar[1433]: linux-arm64/helm Jul 7 05:45:18.101503 update_engine[1423]: I20250707 05:45:18.101459 1423 update_check_scheduler.cc:74] Next update check in 7m42s Jul 7 05:45:18.110038 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 05:45:18.118715 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 05:45:18.155123 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button) Jul 7 05:45:18.155976 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 05:45:18.155976 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 05:45:18.155976 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 05:45:18.164099 extend-filesystems[1412]: Resized filesystem in /dev/vda9 Jul 7 05:45:18.157936 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 05:45:18.159783 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 05:45:18.161162 systemd-logind[1418]: New seat seat0. Jul 7 05:45:18.166525 bash[1464]: Updated "/home/core/.ssh/authorized_keys" Jul 7 05:45:18.167509 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 05:45:18.170379 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 05:45:18.173365 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 05:45:18.206991 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 05:45:18.318483 containerd[1443]: time="2025-07-07T05:45:18.318387380Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 05:45:18.348117 containerd[1443]: time="2025-07-07T05:45:18.347892091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349469384Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349514089Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349539024Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349732722Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349754728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349810571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.349822287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.350015060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.350030437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.350043926Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350559 containerd[1443]: time="2025-07-07T05:45:18.350053175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350806 containerd[1443]: time="2025-07-07T05:45:18.350128288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350806 containerd[1443]: time="2025-07-07T05:45:18.350331968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350806 containerd[1443]: time="2025-07-07T05:45:18.350434598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 05:45:18.350806 containerd[1443]: time="2025-07-07T05:45:18.350448587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 05:45:18.351139 containerd[1443]: time="2025-07-07T05:45:18.351111577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 7 05:45:18.351320 containerd[1443]: time="2025-07-07T05:45:18.351302577Z" level=info msg="metadata content store policy set" policy=shared Jul 7 05:45:18.354624 containerd[1443]: time="2025-07-07T05:45:18.354498901Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 05:45:18.354624 containerd[1443]: time="2025-07-07T05:45:18.354548732Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 05:45:18.354624 containerd[1443]: time="2025-07-07T05:45:18.354565188Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 05:45:18.354624 containerd[1443]: time="2025-07-07T05:45:18.354583456Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 05:45:18.354624 containerd[1443]: time="2025-07-07T05:45:18.354605076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 05:45:18.355107 containerd[1443]: time="2025-07-07T05:45:18.355014285Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355568094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355723137Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355739824Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355751347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355765992Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355794974Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355807846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355820679Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355834939Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355846847Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355858409Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355869855Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355888585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356298 containerd[1443]: time="2025-07-07T05:45:18.355901265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355916025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355927818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355939072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355950788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355961848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355973333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.355989558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356006361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356017884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356031758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356043667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356058273Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356077890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356089336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.356566 containerd[1443]: time="2025-07-07T05:45:18.356108644Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 05:45:18.356826 containerd[1443]: time="2025-07-07T05:45:18.356219097Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 05:45:18.356826 containerd[1443]: time="2025-07-07T05:45:18.356236787Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 05:45:18.356826 containerd[1443]: time="2025-07-07T05:45:18.356247154Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 05:45:18.356826 containerd[1443]: time="2025-07-07T05:45:18.356257868Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 05:45:18.356826 containerd[1443]: time="2025-07-07T05:45:18.356266462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.357077 containerd[1443]: time="2025-07-07T05:45:18.356277870Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 7 05:45:18.357374 containerd[1443]: time="2025-07-07T05:45:18.357131512Z" level=info msg="NRI interface is disabled by configuration." Jul 7 05:45:18.357374 containerd[1443]: time="2025-07-07T05:45:18.357153133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 7 05:45:18.357952 containerd[1443]: time="2025-07-07T05:45:18.357815390Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 05:45:18.358798 containerd[1443]: time="2025-07-07T05:45:18.358068978Z" level=info msg="Connect containerd service" Jul 7 05:45:18.358798 containerd[1443]: time="2025-07-07T05:45:18.358108134Z" level=info msg="using legacy CRI server" Jul 7 05:45:18.358798 containerd[1443]: time="2025-07-07T05:45:18.358115418Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 05:45:18.358798 containerd[1443]: time="2025-07-07T05:45:18.358215812Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 05:45:18.359523 containerd[1443]: time="2025-07-07T05:45:18.359496585Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 05:45:18.361237 containerd[1443]: time="2025-07-07T05:45:18.361208341Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 7 05:45:18.361448 containerd[1443]: time="2025-07-07T05:45:18.361420923Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 05:45:18.361791 containerd[1443]: time="2025-07-07T05:45:18.361737677Z" level=info msg="Start subscribing containerd event" Jul 7 05:45:18.361852 containerd[1443]: time="2025-07-07T05:45:18.361804388Z" level=info msg="Start recovering state" Jul 7 05:45:18.361903 containerd[1443]: time="2025-07-07T05:45:18.361883278Z" level=info msg="Start event monitor" Jul 7 05:45:18.361903 containerd[1443]: time="2025-07-07T05:45:18.361901430Z" level=info msg="Start snapshots syncer" Jul 7 05:45:18.361951 containerd[1443]: time="2025-07-07T05:45:18.361912722Z" level=info msg="Start cni network conf syncer for default" Jul 7 05:45:18.361951 containerd[1443]: time="2025-07-07T05:45:18.361924129Z" level=info msg="Start streaming server" Jul 7 05:45:18.362201 containerd[1443]: time="2025-07-07T05:45:18.362053428Z" level=info msg="containerd successfully booted in 0.044433s" Jul 7 05:45:18.362143 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 05:45:18.459623 tar[1433]: linux-arm64/LICENSE Jul 7 05:45:18.459623 tar[1433]: linux-arm64/README.md Jul 7 05:45:18.470511 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 05:45:18.548395 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 05:45:18.567047 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 05:45:18.580990 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 05:45:18.587488 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 05:45:18.587722 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 05:45:18.590282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 05:45:18.607750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jul 7 05:45:18.619090 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 05:45:18.621166 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 7 05:45:18.622114 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 05:45:19.860891 systemd-networkd[1372]: eth0: Gained IPv6LL Jul 7 05:45:19.863450 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 05:45:19.864971 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 05:45:19.876981 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 05:45:19.879236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:45:19.881045 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 05:45:19.907318 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 05:45:19.920345 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 05:45:19.920563 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 05:45:19.921790 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 05:45:20.500547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:45:20.501833 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 05:45:20.506047 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 05:45:20.506805 systemd[1]: Startup finished in 582ms (kernel) + 5.610s (initrd) + 4.212s (userspace) = 10.405s. 
Jul 7 05:45:20.962636 kubelet[1521]: E0707 05:45:20.962520 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 05:45:20.964886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 05:45:20.965036 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 05:45:23.301793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 05:45:23.303041 systemd[1]: Started sshd@0-10.0.0.6:22-10.0.0.1:38100.service - OpenSSH per-connection server daemon (10.0.0.1:38100). Jul 7 05:45:23.350148 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 38100 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:45:23.351941 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:45:23.364841 systemd-logind[1418]: New session 1 of user core. Jul 7 05:45:23.365460 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 05:45:23.376034 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 05:45:23.389753 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 05:45:23.391695 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 05:45:23.399775 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 05:45:23.474445 systemd[1538]: Queued start job for default target default.target. Jul 7 05:45:23.487686 systemd[1538]: Created slice app.slice - User Application Slice. Jul 7 05:45:23.487730 systemd[1538]: Reached target paths.target - Paths. Jul 7 05:45:23.487743 systemd[1538]: Reached target timers.target - Timers. 
Jul 7 05:45:23.489034 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 7 05:45:23.499342 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 7 05:45:23.499415 systemd[1538]: Reached target sockets.target - Sockets.
Jul 7 05:45:23.499427 systemd[1538]: Reached target basic.target - Basic System.
Jul 7 05:45:23.499467 systemd[1538]: Reached target default.target - Main User Target.
Jul 7 05:45:23.499494 systemd[1538]: Startup finished in 93ms.
Jul 7 05:45:23.499804 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 7 05:45:23.501139 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 7 05:45:23.561314 systemd[1]: Started sshd@1-10.0.0.6:22-10.0.0.1:38104.service - OpenSSH per-connection server daemon (10.0.0.1:38104).
Jul 7 05:45:23.604333 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 38104 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:23.605715 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:23.609632 systemd-logind[1418]: New session 2 of user core.
Jul 7 05:45:23.619964 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 7 05:45:23.670890 sshd[1549]: pam_unix(sshd:session): session closed for user core
Jul 7 05:45:23.683229 systemd[1]: sshd@1-10.0.0.6:22-10.0.0.1:38104.service: Deactivated successfully.
Jul 7 05:45:23.684818 systemd[1]: session-2.scope: Deactivated successfully.
Jul 7 05:45:23.686111 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit.
Jul 7 05:45:23.687273 systemd[1]: Started sshd@2-10.0.0.6:22-10.0.0.1:38112.service - OpenSSH per-connection server daemon (10.0.0.1:38112).
Jul 7 05:45:23.688140 systemd-logind[1418]: Removed session 2.
Jul 7 05:45:23.719633 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 38112 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:23.721006 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:23.725166 systemd-logind[1418]: New session 3 of user core.
Jul 7 05:45:23.736874 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 7 05:45:23.784102 sshd[1556]: pam_unix(sshd:session): session closed for user core
Jul 7 05:45:23.801136 systemd[1]: sshd@2-10.0.0.6:22-10.0.0.1:38112.service: Deactivated successfully.
Jul 7 05:45:23.802573 systemd[1]: session-3.scope: Deactivated successfully.
Jul 7 05:45:23.805059 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit.
Jul 7 05:45:23.806598 systemd-logind[1418]: Removed session 3.
Jul 7 05:45:23.826335 systemd[1]: Started sshd@3-10.0.0.6:22-10.0.0.1:38114.service - OpenSSH per-connection server daemon (10.0.0.1:38114).
Jul 7 05:45:23.854163 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 38114 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:23.855505 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:23.860610 systemd-logind[1418]: New session 4 of user core.
Jul 7 05:45:23.867928 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 7 05:45:23.919466 sshd[1563]: pam_unix(sshd:session): session closed for user core
Jul 7 05:45:23.932193 systemd[1]: sshd@3-10.0.0.6:22-10.0.0.1:38114.service: Deactivated successfully.
Jul 7 05:45:23.933998 systemd[1]: session-4.scope: Deactivated successfully.
Jul 7 05:45:23.935383 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit.
Jul 7 05:45:23.936751 systemd[1]: Started sshd@4-10.0.0.6:22-10.0.0.1:38122.service - OpenSSH per-connection server daemon (10.0.0.1:38122).
Jul 7 05:45:23.937713 systemd-logind[1418]: Removed session 4.
Jul 7 05:45:23.968863 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 38122 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:23.970108 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:23.974305 systemd-logind[1418]: New session 5 of user core.
Jul 7 05:45:23.985861 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 7 05:45:24.046386 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 7 05:45:24.046716 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:45:24.060763 sudo[1573]: pam_unix(sudo:session): session closed for user root
Jul 7 05:45:24.062627 sshd[1570]: pam_unix(sshd:session): session closed for user core
Jul 7 05:45:24.071213 systemd[1]: sshd@4-10.0.0.6:22-10.0.0.1:38122.service: Deactivated successfully.
Jul 7 05:45:24.072908 systemd[1]: session-5.scope: Deactivated successfully.
Jul 7 05:45:24.073557 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit.
Jul 7 05:45:24.083120 systemd[1]: Started sshd@5-10.0.0.6:22-10.0.0.1:38136.service - OpenSSH per-connection server daemon (10.0.0.1:38136).
Jul 7 05:45:24.084506 systemd-logind[1418]: Removed session 5.
Jul 7 05:45:24.117549 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 38136 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:24.118489 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:24.122545 systemd-logind[1418]: New session 6 of user core.
Jul 7 05:45:24.132952 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 7 05:45:24.193040 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 7 05:45:24.193347 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:45:24.196580 sudo[1582]: pam_unix(sudo:session): session closed for user root
Jul 7 05:45:24.201560 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 7 05:45:24.201880 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:45:24.223006 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 7 05:45:24.224271 auditctl[1585]: No rules
Jul 7 05:45:24.225167 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 7 05:45:24.225396 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 7 05:45:24.227153 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 7 05:45:24.251888 augenrules[1603]: No rules
Jul 7 05:45:24.253206 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 7 05:45:24.254591 sudo[1581]: pam_unix(sudo:session): session closed for user root
Jul 7 05:45:24.256715 sshd[1578]: pam_unix(sshd:session): session closed for user core
Jul 7 05:45:24.267226 systemd[1]: sshd@5-10.0.0.6:22-10.0.0.1:38136.service: Deactivated successfully.
Jul 7 05:45:24.269031 systemd[1]: session-6.scope: Deactivated successfully.
Jul 7 05:45:24.270477 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit.
Jul 7 05:45:24.271878 systemd[1]: Started sshd@6-10.0.0.6:22-10.0.0.1:38138.service - OpenSSH per-connection server daemon (10.0.0.1:38138).
Jul 7 05:45:24.274110 systemd-logind[1418]: Removed session 6.
Jul 7 05:45:24.304643 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 38138 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:45:24.305919 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:45:24.309904 systemd-logind[1418]: New session 7 of user core.
Jul 7 05:45:24.315882 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 7 05:45:24.366521 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 7 05:45:24.366854 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 7 05:45:24.683069 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 7 05:45:24.683643 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 7 05:45:24.940567 dockerd[1632]: time="2025-07-07T05:45:24.940427078Z" level=info msg="Starting up"
Jul 7 05:45:25.323601 dockerd[1632]: time="2025-07-07T05:45:25.323448074Z" level=info msg="Loading containers: start."
Jul 7 05:45:25.425717 kernel: Initializing XFRM netlink socket
Jul 7 05:45:25.500827 systemd-networkd[1372]: docker0: Link UP
Jul 7 05:45:25.522148 dockerd[1632]: time="2025-07-07T05:45:25.522093481Z" level=info msg="Loading containers: done."
Jul 7 05:45:25.533649 dockerd[1632]: time="2025-07-07T05:45:25.533583973Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 7 05:45:25.533874 dockerd[1632]: time="2025-07-07T05:45:25.533725513Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 7 05:45:25.533874 dockerd[1632]: time="2025-07-07T05:45:25.533852860Z" level=info msg="Daemon has completed initialization"
Jul 7 05:45:25.560751 dockerd[1632]: time="2025-07-07T05:45:25.560583644Z" level=info msg="API listen on /run/docker.sock"
Jul 7 05:45:25.560948 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 7 05:45:26.270866 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3387328098-merged.mount: Deactivated successfully.
Jul 7 05:45:26.426429 containerd[1443]: time="2025-07-07T05:45:26.426384866Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jul 7 05:45:27.199288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269595574.mount: Deactivated successfully.
Jul 7 05:45:28.078153 containerd[1443]: time="2025-07-07T05:45:28.078107507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:28.079290 containerd[1443]: time="2025-07-07T05:45:28.079251529Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795"
Jul 7 05:45:28.082880 containerd[1443]: time="2025-07-07T05:45:28.082807909Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:28.085910 containerd[1443]: time="2025-07-07T05:45:28.085857565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:28.088106 containerd[1443]: time="2025-07-07T05:45:28.088057662Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.661624692s"
Jul 7 05:45:28.088189 containerd[1443]: time="2025-07-07T05:45:28.088113402Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\""
Jul 7 05:45:28.091119 containerd[1443]: time="2025-07-07T05:45:28.091088659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jul 7 05:45:29.190581 containerd[1443]: time="2025-07-07T05:45:29.190126430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:29.190972 containerd[1443]: time="2025-07-07T05:45:29.190813723Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679"
Jul 7 05:45:29.191499 containerd[1443]: time="2025-07-07T05:45:29.191474441Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:29.194387 containerd[1443]: time="2025-07-07T05:45:29.194352640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:29.195638 containerd[1443]: time="2025-07-07T05:45:29.195597883Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.104470627s"
Jul 7 05:45:29.195682 containerd[1443]: time="2025-07-07T05:45:29.195639450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 7 05:45:29.196181 containerd[1443]: time="2025-07-07T05:45:29.196144132Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jul 7 05:45:30.387753 containerd[1443]: time="2025-07-07T05:45:30.387546995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:30.388258 containerd[1443]: time="2025-07-07T05:45:30.388217904Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068"
Jul 7 05:45:30.388984 containerd[1443]: time="2025-07-07T05:45:30.388958774Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:30.392751 containerd[1443]: time="2025-07-07T05:45:30.392707395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:30.393809 containerd[1443]: time="2025-07-07T05:45:30.393772979Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.197474105s"
Jul 7 05:45:30.393863 containerd[1443]: time="2025-07-07T05:45:30.393815940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\""
Jul 7 05:45:30.394913 containerd[1443]: time="2025-07-07T05:45:30.394879459Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jul 7 05:45:31.211038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 7 05:45:31.221929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:45:31.318769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:45:31.322384 (kubelet)[1853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:45:31.359565 kubelet[1853]: E0707 05:45:31.359520 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:45:31.362428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:45:31.362584 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:45:31.470778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014523193.mount: Deactivated successfully.
Jul 7 05:45:31.898596 containerd[1443]: time="2025-07-07T05:45:31.898400022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:31.899304 containerd[1443]: time="2025-07-07T05:45:31.899223567Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959"
Jul 7 05:45:31.899971 containerd[1443]: time="2025-07-07T05:45:31.899937423Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:31.902028 containerd[1443]: time="2025-07-07T05:45:31.901980786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:31.903037 containerd[1443]: time="2025-07-07T05:45:31.902996844Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.508078701s"
Jul 7 05:45:31.903078 containerd[1443]: time="2025-07-07T05:45:31.903037818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\""
Jul 7 05:45:31.903719 containerd[1443]: time="2025-07-07T05:45:31.903516080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 7 05:45:32.459678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914668348.mount: Deactivated successfully.
Jul 7 05:45:33.169691 containerd[1443]: time="2025-07-07T05:45:33.169281310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.170077 containerd[1443]: time="2025-07-07T05:45:33.170031674Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 7 05:45:33.170813 containerd[1443]: time="2025-07-07T05:45:33.170761897Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.175450 containerd[1443]: time="2025-07-07T05:45:33.175379749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.176819 containerd[1443]: time="2025-07-07T05:45:33.176779656Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.273229501s"
Jul 7 05:45:33.176879 containerd[1443]: time="2025-07-07T05:45:33.176823479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 7 05:45:33.177649 containerd[1443]: time="2025-07-07T05:45:33.177486993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 7 05:45:33.625894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount79641836.mount: Deactivated successfully.
Jul 7 05:45:33.630123 containerd[1443]: time="2025-07-07T05:45:33.630073312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.630874 containerd[1443]: time="2025-07-07T05:45:33.630838005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 7 05:45:33.631467 containerd[1443]: time="2025-07-07T05:45:33.631427406Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.633521 containerd[1443]: time="2025-07-07T05:45:33.633487643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:33.634971 containerd[1443]: time="2025-07-07T05:45:33.634946658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.426787ms"
Jul 7 05:45:33.635028 containerd[1443]: time="2025-07-07T05:45:33.634976709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 7 05:45:33.635576 containerd[1443]: time="2025-07-07T05:45:33.635407336Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jul 7 05:45:34.154373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895465130.mount: Deactivated successfully.
Jul 7 05:45:35.816237 containerd[1443]: time="2025-07-07T05:45:35.815895742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:35.816655 containerd[1443]: time="2025-07-07T05:45:35.816497583Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
Jul 7 05:45:35.817521 containerd[1443]: time="2025-07-07T05:45:35.817454878Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:35.821346 containerd[1443]: time="2025-07-07T05:45:35.821304220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 05:45:35.823866 containerd[1443]: time="2025-07-07T05:45:35.823831767Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.188383981s"
Jul 7 05:45:35.824089 containerd[1443]: time="2025-07-07T05:45:35.823977774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jul 7 05:45:41.613267 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 7 05:45:41.622949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:45:41.721369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:45:41.724677 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 7 05:45:41.755646 kubelet[2006]: E0707 05:45:41.755595 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 7 05:45:41.758254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 7 05:45:41.758494 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 7 05:45:42.047378 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:45:42.055127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:45:42.077270 systemd[1]: Reloading requested from client PID 2021 ('systemctl') (unit session-7.scope)...
Jul 7 05:45:42.077285 systemd[1]: Reloading...
Jul 7 05:45:42.154774 zram_generator::config[2060]: No configuration found.
Jul 7 05:45:42.341531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 7 05:45:42.394561 systemd[1]: Reloading finished in 316 ms.
Jul 7 05:45:42.438980 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:45:42.441686 systemd[1]: kubelet.service: Deactivated successfully.
Jul 7 05:45:42.443791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:45:42.455142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 7 05:45:42.549200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 7 05:45:42.552598 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 7 05:45:42.585747 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:45:42.585747 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 7 05:45:42.585747 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 7 05:45:42.586073 kubelet[2107]: I0707 05:45:42.585962 2107 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 7 05:45:43.662739 kubelet[2107]: I0707 05:45:43.661545 2107 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 7 05:45:43.662739 kubelet[2107]: I0707 05:45:43.661581 2107 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 7 05:45:43.662739 kubelet[2107]: I0707 05:45:43.662014 2107 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 7 05:45:43.692453 kubelet[2107]: E0707 05:45:43.692410 2107 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:45:43.693556 kubelet[2107]: I0707 05:45:43.693538 2107 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 7 05:45:43.700345 kubelet[2107]: E0707 05:45:43.700313 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 7 05:45:43.700345 kubelet[2107]: I0707 05:45:43.700342 2107 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 7 05:45:43.703841 kubelet[2107]: I0707 05:45:43.703806 2107 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 7 05:45:43.704571 kubelet[2107]: I0707 05:45:43.704541 2107 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 7 05:45:43.704739 kubelet[2107]: I0707 05:45:43.704693 2107 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 7 05:45:43.704906 kubelet[2107]: I0707 05:45:43.704732 2107 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 7 05:45:43.704983 kubelet[2107]: I0707 05:45:43.704970 2107 topology_manager.go:138] "Creating topology manager with none policy"
Jul 7 05:45:43.704983 kubelet[2107]: I0707 05:45:43.704979 2107 container_manager_linux.go:300] "Creating device plugin manager"
Jul 7 05:45:43.705245 kubelet[2107]: I0707 05:45:43.705217 2107 state_mem.go:36] "Initialized new in-memory state store"
Jul 7 05:45:43.707189 kubelet[2107]: I0707 05:45:43.707160 2107 kubelet.go:408] "Attempting to sync node with API server"
Jul 7 05:45:43.707189 kubelet[2107]: I0707 05:45:43.707188 2107 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 7 05:45:43.707237 kubelet[2107]: I0707 05:45:43.707210 2107 kubelet.go:314] "Adding apiserver pod source"
Jul 7 05:45:43.707311 kubelet[2107]: I0707 05:45:43.707292 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 7 05:45:43.711174 kubelet[2107]: W0707 05:45:43.711049 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
Jul 7 05:45:43.711174 kubelet[2107]: E0707 05:45:43.711122 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:45:43.711332 kubelet[2107]: W0707 05:45:43.711176 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
Jul 7 05:45:43.711332 kubelet[2107]: E0707 05:45:43.711223 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:45:43.713243 kubelet[2107]: I0707 05:45:43.713224 2107 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 7 05:45:43.713956 kubelet[2107]: I0707 05:45:43.713941 2107 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 7 05:45:43.714062 kubelet[2107]: W0707 05:45:43.714049 2107 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 7 05:45:43.715096 kubelet[2107]: I0707 05:45:43.714993 2107 server.go:1274] "Started kubelet"
Jul 7 05:45:43.715802 kubelet[2107]: I0707 05:45:43.715534 2107 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 7 05:45:43.720734 kubelet[2107]: I0707 05:45:43.719983 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 7 05:45:43.720734 kubelet[2107]: I0707 05:45:43.720374 2107 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 7 05:45:43.720734 kubelet[2107]: I0707 05:45:43.720530 2107 server.go:449] "Adding debug handlers to kubelet server"
Jul 7 05:45:43.721487 kubelet[2107]: I0707 05:45:43.721466 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 7 05:45:43.722308 kubelet[2107]: I0707 05:45:43.722286 2107 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 7 05:45:43.724771 kubelet[2107]: E0707 05:45:43.723184 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 7 05:45:43.724771 kubelet[2107]: I0707 05:45:43.723502 2107 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 7 05:45:43.724771 kubelet[2107]: I0707 05:45:43.723565 2107 reconciler.go:26] "Reconciler: start to sync state"
Jul 7 05:45:43.724771 kubelet[2107]: E0707 05:45:43.723827 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="200ms"
Jul 7 05:45:43.724771 kubelet[2107]: W0707 05:45:43.723887 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused
Jul 7 05:45:43.724771 kubelet[2107]: E0707 05:45:43.723934 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError"
Jul 7 05:45:43.724771 kubelet[2107]: E0707 05:45:43.724045 2107 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 7 05:45:43.724771 kubelet[2107]: I0707 05:45:43.724087 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 7 05:45:43.725154 kubelet[2107]: I0707 05:45:43.725124 2107 factory.go:221] Registration of the systemd container factory successfully
Jul 7 05:45:43.725257 kubelet[2107]: I0707 05:45:43.725232 2107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 7 05:45:43.725511 kubelet[2107]: E0707 05:45:43.723985 2107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.6:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.6:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe1e2aacf14b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 05:45:43.714968754 +0000 UTC m=+1.159424344,LastTimestamp:2025-07-07 05:45:43.714968754 +0000 UTC m=+1.159424344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 7 05:45:43.726586 kubelet[2107]: I0707 05:45:43.726565 2107 factory.go:221] Registration of the containerd container factory successfully
Jul 7 05:45:43.737865 kubelet[2107]: I0707 05:45:43.737834 2107 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 7 05:45:43.737865 kubelet[2107]: I0707 05:45:43.737855 2107 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 7 05:45:43.737865 kubelet[2107]:
I0707 05:45:43.737874 2107 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:45:43.740927 kubelet[2107]: I0707 05:45:43.740756 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:45:43.741826 kubelet[2107]: I0707 05:45:43.741806 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 05:45:43.741880 kubelet[2107]: I0707 05:45:43.741832 2107 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:45:43.741880 kubelet[2107]: I0707 05:45:43.741850 2107 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:45:43.741940 kubelet[2107]: E0707 05:45:43.741889 2107 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:45:43.809480 kubelet[2107]: I0707 05:45:43.809435 2107 policy_none.go:49] "None policy: Start" Jul 7 05:45:43.810415 kubelet[2107]: I0707 05:45:43.810295 2107 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:45:43.810415 kubelet[2107]: I0707 05:45:43.810322 2107 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:45:43.810415 kubelet[2107]: W0707 05:45:43.810298 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Jul 7 05:45:43.810415 kubelet[2107]: E0707 05:45:43.810357 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:45:43.817539 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 7 05:45:43.823282 kubelet[2107]: E0707 05:45:43.823246 2107 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 05:45:43.830640 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 05:45:43.833573 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 05:45:43.842405 kubelet[2107]: E0707 05:45:43.842365 2107 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 05:45:43.843498 kubelet[2107]: I0707 05:45:43.843479 2107 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:45:43.843735 kubelet[2107]: I0707 05:45:43.843691 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:45:43.844036 kubelet[2107]: I0707 05:45:43.843721 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:45:43.844036 kubelet[2107]: I0707 05:45:43.843935 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:45:43.845215 kubelet[2107]: E0707 05:45:43.845183 2107 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 05:45:43.925092 kubelet[2107]: E0707 05:45:43.924969 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="400ms" Jul 7 05:45:43.945111 kubelet[2107]: I0707 05:45:43.945055 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:43.945811 kubelet[2107]: E0707 05:45:43.945766 2107 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 7 05:45:44.049909 systemd[1]: Created slice kubepods-burstable-pod4c9722cc6bd2feedf68e1219145cba0b.slice - libcontainer container kubepods-burstable-pod4c9722cc6bd2feedf68e1219145cba0b.slice. Jul 7 05:45:44.072603 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 7 05:45:44.076408 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 7 05:45:44.125831 kubelet[2107]: I0707 05:45:44.125787 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:44.125831 kubelet[2107]: I0707 05:45:44.125828 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:44.125975 kubelet[2107]: I0707 05:45:44.125847 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:44.125975 kubelet[2107]: I0707 05:45:44.125869 2107 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:44.125975 kubelet[2107]: I0707 05:45:44.125886 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:44.125975 kubelet[2107]: I0707 05:45:44.125903 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:44.125975 kubelet[2107]: I0707 05:45:44.125919 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 05:45:44.126077 kubelet[2107]: I0707 05:45:44.125933 2107 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:44.126077 kubelet[2107]: I0707 05:45:44.125947 2107 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:44.147024 kubelet[2107]: I0707 05:45:44.146997 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:44.147352 kubelet[2107]: E0707 05:45:44.147313 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 7 05:45:44.325508 kubelet[2107]: E0707 05:45:44.325458 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="800ms" Jul 7 05:45:44.370987 kubelet[2107]: E0707 05:45:44.370944 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:44.371609 containerd[1443]: time="2025-07-07T05:45:44.371572787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c9722cc6bd2feedf68e1219145cba0b,Namespace:kube-system,Attempt:0,}" Jul 7 05:45:44.375179 kubelet[2107]: E0707 05:45:44.375124 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:44.375681 containerd[1443]: time="2025-07-07T05:45:44.375486063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 7 
05:45:44.379149 kubelet[2107]: E0707 05:45:44.379126 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:44.379883 containerd[1443]: time="2025-07-07T05:45:44.379689170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 7 05:45:44.548834 kubelet[2107]: I0707 05:45:44.548800 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:44.549188 kubelet[2107]: E0707 05:45:44.549162 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 7 05:45:44.702525 kubelet[2107]: W0707 05:45:44.702362 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Jul 7 05:45:44.702525 kubelet[2107]: E0707 05:45:44.702441 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:45:45.044874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093990126.mount: Deactivated successfully. 
Jul 7 05:45:45.050438 containerd[1443]: time="2025-07-07T05:45:45.050385618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:45:45.052289 containerd[1443]: time="2025-07-07T05:45:45.052262153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 7 05:45:45.053088 containerd[1443]: time="2025-07-07T05:45:45.053058482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:45:45.054103 containerd[1443]: time="2025-07-07T05:45:45.054032035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:45:45.055138 containerd[1443]: time="2025-07-07T05:45:45.054246102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:45:45.055138 containerd[1443]: time="2025-07-07T05:45:45.055087346Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:45:45.056023 containerd[1443]: time="2025-07-07T05:45:45.055985814Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 05:45:45.061532 containerd[1443]: time="2025-07-07T05:45:45.061486988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 05:45:45.062544 
containerd[1443]: time="2025-07-07T05:45:45.062507215Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 690.850402ms" Jul 7 05:45:45.064281 containerd[1443]: time="2025-07-07T05:45:45.064249084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 684.488714ms" Jul 7 05:45:45.065634 containerd[1443]: time="2025-07-07T05:45:45.065604577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 690.058341ms" Jul 7 05:45:45.116640 kubelet[2107]: W0707 05:45:45.116587 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Jul 7 05:45:45.116640 kubelet[2107]: E0707 05:45:45.116641 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:45:45.127262 kubelet[2107]: E0707 05:45:45.127215 2107 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.6:6443: connect: connection refused" interval="1.6s" Jul 7 05:45:45.182230 kubelet[2107]: W0707 05:45:45.182151 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Jul 7 05:45:45.182230 kubelet[2107]: E0707 05:45:45.182226 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:45:45.218869 containerd[1443]: time="2025-07-07T05:45:45.218215312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:45.219046 containerd[1443]: time="2025-07-07T05:45:45.218850841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:45.219219 containerd[1443]: time="2025-07-07T05:45:45.219095838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.219691 containerd[1443]: time="2025-07-07T05:45:45.219610646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:45.219691 containerd[1443]: time="2025-07-07T05:45:45.219669228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:45.219691 containerd[1443]: time="2025-07-07T05:45:45.219679538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.219988 containerd[1443]: time="2025-07-07T05:45:45.219937921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.219988 containerd[1443]: time="2025-07-07T05:45:45.219913705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.221150 containerd[1443]: time="2025-07-07T05:45:45.221046819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:45.221150 containerd[1443]: time="2025-07-07T05:45:45.221089217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:45.221150 containerd[1443]: time="2025-07-07T05:45:45.221099966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.221251 containerd[1443]: time="2025-07-07T05:45:45.221165901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:45.241877 systemd[1]: Started cri-containerd-002b45cabb77c81a815cd53078dd31e7956a93d06c2b3a0b6363e60f5ce2b6d2.scope - libcontainer container 002b45cabb77c81a815cd53078dd31e7956a93d06c2b3a0b6363e60f5ce2b6d2. Jul 7 05:45:45.245312 systemd[1]: Started cri-containerd-334bda33173ac1a1f7b8093fcfd5de517522e6f6f46dea45dfeb223507c85fb8.scope - libcontainer container 334bda33173ac1a1f7b8093fcfd5de517522e6f6f46dea45dfeb223507c85fb8. 
Jul 7 05:45:45.246364 systemd[1]: Started cri-containerd-9538f6bced7a186cccbfb02c5c8110c1bb94828b3bd5549128f30f965fba8288.scope - libcontainer container 9538f6bced7a186cccbfb02c5c8110c1bb94828b3bd5549128f30f965fba8288. Jul 7 05:45:45.280545 containerd[1443]: time="2025-07-07T05:45:45.280493517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"002b45cabb77c81a815cd53078dd31e7956a93d06c2b3a0b6363e60f5ce2b6d2\"" Jul 7 05:45:45.281630 containerd[1443]: time="2025-07-07T05:45:45.281503113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"9538f6bced7a186cccbfb02c5c8110c1bb94828b3bd5549128f30f965fba8288\"" Jul 7 05:45:45.282938 kubelet[2107]: E0707 05:45:45.282457 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:45.282938 kubelet[2107]: E0707 05:45:45.282650 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:45.283050 containerd[1443]: time="2025-07-07T05:45:45.282981644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4c9722cc6bd2feedf68e1219145cba0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"334bda33173ac1a1f7b8093fcfd5de517522e6f6f46dea45dfeb223507c85fb8\"" Jul 7 05:45:45.283532 kubelet[2107]: E0707 05:45:45.283503 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:45.285455 containerd[1443]: time="2025-07-07T05:45:45.285411231Z" level=info 
msg="CreateContainer within sandbox \"334bda33173ac1a1f7b8093fcfd5de517522e6f6f46dea45dfeb223507c85fb8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 05:45:45.285854 containerd[1443]: time="2025-07-07T05:45:45.285828936Z" level=info msg="CreateContainer within sandbox \"002b45cabb77c81a815cd53078dd31e7956a93d06c2b3a0b6363e60f5ce2b6d2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 05:45:45.286642 containerd[1443]: time="2025-07-07T05:45:45.286499749Z" level=info msg="CreateContainer within sandbox \"9538f6bced7a186cccbfb02c5c8110c1bb94828b3bd5549128f30f965fba8288\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 05:45:45.305901 containerd[1443]: time="2025-07-07T05:45:45.305146663Z" level=info msg="CreateContainer within sandbox \"002b45cabb77c81a815cd53078dd31e7956a93d06c2b3a0b6363e60f5ce2b6d2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"76b2dad695fb5c494096d21953822bc81f80dc103a348193f8fbf01070e3abe2\"" Jul 7 05:45:45.306217 containerd[1443]: time="2025-07-07T05:45:45.305989305Z" level=info msg="CreateContainer within sandbox \"334bda33173ac1a1f7b8093fcfd5de517522e6f6f46dea45dfeb223507c85fb8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e6f4d7b5006335c0cee6287073f8ff9dfb23679da38107a517884526ba27ade\"" Jul 7 05:45:45.306558 containerd[1443]: time="2025-07-07T05:45:45.306430627Z" level=info msg="CreateContainer within sandbox \"9538f6bced7a186cccbfb02c5c8110c1bb94828b3bd5549128f30f965fba8288\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f467fde2141031e9868c9be3b19997dbe8187eb6cf6a121bdc1b4542e6e001a5\"" Jul 7 05:45:45.306940 containerd[1443]: time="2025-07-07T05:45:45.306897923Z" level=info msg="StartContainer for \"f467fde2141031e9868c9be3b19997dbe8187eb6cf6a121bdc1b4542e6e001a5\"" Jul 7 05:45:45.308359 containerd[1443]: time="2025-07-07T05:45:45.308173016Z" 
level=info msg="StartContainer for \"76b2dad695fb5c494096d21953822bc81f80dc103a348193f8fbf01070e3abe2\"" Jul 7 05:45:45.308359 containerd[1443]: time="2025-07-07T05:45:45.308175134Z" level=info msg="StartContainer for \"0e6f4d7b5006335c0cee6287073f8ff9dfb23679da38107a517884526ba27ade\"" Jul 7 05:45:45.322536 kubelet[2107]: W0707 05:45:45.322418 2107 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.6:6443: connect: connection refused Jul 7 05:45:45.322536 kubelet[2107]: E0707 05:45:45.322497 2107 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.6:6443: connect: connection refused" logger="UnhandledError" Jul 7 05:45:45.334864 systemd[1]: Started cri-containerd-f467fde2141031e9868c9be3b19997dbe8187eb6cf6a121bdc1b4542e6e001a5.scope - libcontainer container f467fde2141031e9868c9be3b19997dbe8187eb6cf6a121bdc1b4542e6e001a5. Jul 7 05:45:45.339004 systemd[1]: Started cri-containerd-0e6f4d7b5006335c0cee6287073f8ff9dfb23679da38107a517884526ba27ade.scope - libcontainer container 0e6f4d7b5006335c0cee6287073f8ff9dfb23679da38107a517884526ba27ade. Jul 7 05:45:45.340319 systemd[1]: Started cri-containerd-76b2dad695fb5c494096d21953822bc81f80dc103a348193f8fbf01070e3abe2.scope - libcontainer container 76b2dad695fb5c494096d21953822bc81f80dc103a348193f8fbf01070e3abe2. 
Jul 7 05:45:45.350247 kubelet[2107]: I0707 05:45:45.350186 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:45.350567 kubelet[2107]: E0707 05:45:45.350504 2107 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.6:6443/api/v1/nodes\": dial tcp 10.0.0.6:6443: connect: connection refused" node="localhost" Jul 7 05:45:45.385812 containerd[1443]: time="2025-07-07T05:45:45.385693876Z" level=info msg="StartContainer for \"f467fde2141031e9868c9be3b19997dbe8187eb6cf6a121bdc1b4542e6e001a5\" returns successfully" Jul 7 05:45:45.386886 containerd[1443]: time="2025-07-07T05:45:45.386693842Z" level=info msg="StartContainer for \"76b2dad695fb5c494096d21953822bc81f80dc103a348193f8fbf01070e3abe2\" returns successfully" Jul 7 05:45:45.386886 containerd[1443]: time="2025-07-07T05:45:45.386802934Z" level=info msg="StartContainer for \"0e6f4d7b5006335c0cee6287073f8ff9dfb23679da38107a517884526ba27ade\" returns successfully" Jul 7 05:45:45.749780 kubelet[2107]: E0707 05:45:45.749425 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:45.753881 kubelet[2107]: E0707 05:45:45.753537 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:45.755461 kubelet[2107]: E0707 05:45:45.755380 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:46.756415 kubelet[2107]: E0707 05:45:46.756386 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:46.952865 kubelet[2107]: I0707 
05:45:46.952536 2107 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:47.173600 kubelet[2107]: E0707 05:45:47.173500 2107 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 05:45:47.258515 kubelet[2107]: I0707 05:45:47.258471 2107 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 05:45:47.258515 kubelet[2107]: E0707 05:45:47.258510 2107 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 05:45:47.711159 kubelet[2107]: I0707 05:45:47.711105 2107 apiserver.go:52] "Watching apiserver" Jul 7 05:45:47.723771 kubelet[2107]: I0707 05:45:47.723720 2107 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:45:49.508972 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)... Jul 7 05:45:49.508988 systemd[1]: Reloading... Jul 7 05:45:49.580733 zram_generator::config[2424]: No configuration found. Jul 7 05:45:49.664246 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 05:45:49.730241 systemd[1]: Reloading finished in 220 ms. Jul 7 05:45:49.762159 kubelet[2107]: I0707 05:45:49.762070 2107 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:45:49.762103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:45:49.782103 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 05:45:49.782300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:45:49.782340 systemd[1]: kubelet.service: Consumed 1.546s CPU time, 132.2M memory peak, 0B memory swap peak. 
Jul 7 05:45:49.792968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 05:45:49.896900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 05:45:49.901109 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 05:45:49.936872 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:45:49.937748 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 05:45:49.937748 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 05:45:49.937748 kubelet[2466]: I0707 05:45:49.937316 2466 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 05:45:49.943144 kubelet[2466]: I0707 05:45:49.942806 2466 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 05:45:49.943144 kubelet[2466]: I0707 05:45:49.942835 2466 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 05:45:49.943257 kubelet[2466]: I0707 05:45:49.943241 2466 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 05:45:49.945040 kubelet[2466]: I0707 05:45:49.945014 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 7 05:45:49.947231 kubelet[2466]: I0707 05:45:49.947198 2466 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 05:45:49.952823 kubelet[2466]: E0707 05:45:49.952780 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 05:45:49.952823 kubelet[2466]: I0707 05:45:49.952816 2466 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 05:45:49.954995 kubelet[2466]: I0707 05:45:49.954975 2466 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 05:45:49.955086 kubelet[2466]: I0707 05:45:49.955077 2466 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 05:45:49.955196 kubelet[2466]: I0707 05:45:49.955172 2466 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 05:45:49.955347 kubelet[2466]: I0707 05:45:49.955196 2466 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 05:45:49.955425 kubelet[2466]: I0707 05:45:49.955355 2466 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 05:45:49.955425 kubelet[2466]: I0707 05:45:49.955363 2466 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 05:45:49.955425 kubelet[2466]: I0707 05:45:49.955404 2466 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:45:49.955515 kubelet[2466]: I0707 05:45:49.955510 2466 kubelet.go:408] "Attempting to 
sync node with API server" Jul 7 05:45:49.955544 kubelet[2466]: I0707 05:45:49.955523 2466 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 05:45:49.955544 kubelet[2466]: I0707 05:45:49.955540 2466 kubelet.go:314] "Adding apiserver pod source" Jul 7 05:45:49.955759 kubelet[2466]: I0707 05:45:49.955552 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 05:45:49.956318 kubelet[2466]: I0707 05:45:49.956297 2466 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 05:45:49.957021 kubelet[2466]: I0707 05:45:49.956999 2466 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 05:45:49.961586 kubelet[2466]: I0707 05:45:49.960132 2466 server.go:1274] "Started kubelet" Jul 7 05:45:49.961586 kubelet[2466]: I0707 05:45:49.960277 2466 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 05:45:49.961586 kubelet[2466]: I0707 05:45:49.960450 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 05:45:49.961586 kubelet[2466]: I0707 05:45:49.960668 2466 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 05:45:49.961586 kubelet[2466]: I0707 05:45:49.961110 2466 server.go:449] "Adding debug handlers to kubelet server" Jul 7 05:45:49.962775 kubelet[2466]: I0707 05:45:49.962752 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 05:45:49.968980 kubelet[2466]: I0707 05:45:49.964404 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 05:45:49.969110 kubelet[2466]: I0707 05:45:49.969091 2466 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 05:45:49.969260 kubelet[2466]: E0707 05:45:49.969242 2466 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 05:45:49.969816 kubelet[2466]: I0707 05:45:49.969779 2466 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 05:45:49.970109 kubelet[2466]: I0707 05:45:49.970092 2466 reconciler.go:26] "Reconciler: start to sync state" Jul 7 05:45:49.981926 kubelet[2466]: I0707 05:45:49.981886 2466 factory.go:221] Registration of the systemd container factory successfully Jul 7 05:45:49.982029 kubelet[2466]: I0707 05:45:49.982004 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 05:45:49.984922 kubelet[2466]: E0707 05:45:49.984743 2466 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 05:45:49.986542 kubelet[2466]: I0707 05:45:49.986052 2466 factory.go:221] Registration of the containerd container factory successfully Jul 7 05:45:49.996240 kubelet[2466]: I0707 05:45:49.995961 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 05:45:49.997682 kubelet[2466]: I0707 05:45:49.997655 2466 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 05:45:49.997682 kubelet[2466]: I0707 05:45:49.997683 2466 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 05:45:49.997796 kubelet[2466]: I0707 05:45:49.997723 2466 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 05:45:49.997796 kubelet[2466]: E0707 05:45:49.997769 2466 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 05:45:50.025372 kubelet[2466]: I0707 05:45:50.025262 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 05:45:50.025372 kubelet[2466]: I0707 05:45:50.025284 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 05:45:50.025372 kubelet[2466]: I0707 05:45:50.025304 2466 state_mem.go:36] "Initialized new in-memory state store" Jul 7 05:45:50.025524 kubelet[2466]: I0707 05:45:50.025450 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 05:45:50.025524 kubelet[2466]: I0707 05:45:50.025461 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 05:45:50.025524 kubelet[2466]: I0707 05:45:50.025477 2466 policy_none.go:49] "None policy: Start" Jul 7 05:45:50.026418 kubelet[2466]: I0707 05:45:50.026095 2466 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 05:45:50.026418 kubelet[2466]: I0707 05:45:50.026119 2466 state_mem.go:35] "Initializing new in-memory state store" Jul 7 05:45:50.026418 kubelet[2466]: I0707 05:45:50.026273 2466 state_mem.go:75] "Updated machine memory state" Jul 7 05:45:50.030776 kubelet[2466]: I0707 05:45:50.030282 2466 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 05:45:50.030776 kubelet[2466]: I0707 05:45:50.030461 2466 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 05:45:50.030776 kubelet[2466]: I0707 05:45:50.030478 2466 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 7 05:45:50.030776 kubelet[2466]: I0707 05:45:50.030694 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 05:45:50.135929 kubelet[2466]: I0707 05:45:50.135882 2466 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 7 05:45:50.141807 kubelet[2466]: I0707 05:45:50.141764 2466 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 7 05:45:50.142517 kubelet[2466]: I0707 05:45:50.142181 2466 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 7 05:45:50.170802 kubelet[2466]: I0707 05:45:50.170766 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:50.170921 kubelet[2466]: I0707 05:45:50.170808 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:50.170921 kubelet[2466]: I0707 05:45:50.170836 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:50.170921 kubelet[2466]: I0707 05:45:50.170853 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:50.170921 kubelet[2466]: I0707 05:45:50.170874 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:50.170921 kubelet[2466]: I0707 05:45:50.170901 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:50.171035 kubelet[2466]: I0707 05:45:50.170915 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 7 05:45:50.171035 kubelet[2466]: I0707 05:45:50.170929 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c9722cc6bd2feedf68e1219145cba0b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4c9722cc6bd2feedf68e1219145cba0b\") " pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:50.171035 kubelet[2466]: I0707 05:45:50.170943 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 05:45:50.413721 kubelet[2466]: E0707 05:45:50.413593 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:50.416046 kubelet[2466]: E0707 05:45:50.415970 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:50.416228 kubelet[2466]: E0707 05:45:50.416176 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:50.510622 sudo[2507]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 7 05:45:50.510919 sudo[2507]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 7 05:45:50.938152 sudo[2507]: pam_unix(sudo:session): session closed for user root Jul 7 05:45:50.956454 kubelet[2466]: I0707 05:45:50.956416 2466 apiserver.go:52] "Watching apiserver" Jul 7 05:45:50.970741 kubelet[2466]: I0707 05:45:50.970680 2466 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 05:45:51.011845 kubelet[2466]: E0707 05:45:51.011380 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:51.012109 kubelet[2466]: E0707 05:45:51.012087 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 
05:45:51.019956 kubelet[2466]: E0707 05:45:51.019875 2466 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 05:45:51.020205 kubelet[2466]: E0707 05:45:51.020187 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:51.029887 kubelet[2466]: I0707 05:45:51.029789 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.029749922 podStartE2EDuration="1.029749922s" podCreationTimestamp="2025-07-07 05:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:45:51.029577778 +0000 UTC m=+1.125167139" watchObservedRunningTime="2025-07-07 05:45:51.029749922 +0000 UTC m=+1.125339283" Jul 7 05:45:51.046722 kubelet[2466]: I0707 05:45:51.046652 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.046636783 podStartE2EDuration="1.046636783s" podCreationTimestamp="2025-07-07 05:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:45:51.039717783 +0000 UTC m=+1.135307145" watchObservedRunningTime="2025-07-07 05:45:51.046636783 +0000 UTC m=+1.142226144" Jul 7 05:45:51.053864 kubelet[2466]: I0707 05:45:51.053817 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.05368972 podStartE2EDuration="1.05368972s" podCreationTimestamp="2025-07-07 05:45:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 
05:45:51.046796605 +0000 UTC m=+1.142385966" watchObservedRunningTime="2025-07-07 05:45:51.05368972 +0000 UTC m=+1.149279081" Jul 7 05:45:52.012927 kubelet[2466]: E0707 05:45:52.012890 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:52.013246 kubelet[2466]: E0707 05:45:52.012956 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:52.378477 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 7 05:45:52.380184 sshd[1611]: pam_unix(sshd:session): session closed for user core Jul 7 05:45:52.384237 systemd[1]: sshd@6-10.0.0.6:22-10.0.0.1:38138.service: Deactivated successfully. Jul 7 05:45:52.385913 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 05:45:52.386790 systemd[1]: session-7.scope: Consumed 7.993s CPU time, 153.3M memory peak, 0B memory swap peak. Jul 7 05:45:52.387275 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Jul 7 05:45:52.388206 systemd-logind[1418]: Removed session 7. 
Jul 7 05:45:53.014461 kubelet[2466]: E0707 05:45:53.014355 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:53.413621 kubelet[2466]: E0707 05:45:53.413500 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:54.757292 kubelet[2466]: I0707 05:45:54.757136 2466 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 05:45:54.757615 containerd[1443]: time="2025-07-07T05:45:54.757418876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 05:45:54.758474 kubelet[2466]: I0707 05:45:54.757985 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 05:45:55.552103 systemd[1]: Created slice kubepods-besteffort-pod31a04227_3eac_4e79_9660_038512ad9d05.slice - libcontainer container kubepods-besteffort-pod31a04227_3eac_4e79_9660_038512ad9d05.slice. Jul 7 05:45:55.566880 systemd[1]: Created slice kubepods-burstable-pod3e1905be_e755_47a1_9f5a_a1168964b3b2.slice - libcontainer container kubepods-burstable-pod3e1905be_e755_47a1_9f5a_a1168964b3b2.slice. 
Jul 7 05:45:55.610234 kubelet[2466]: I0707 05:45:55.609731 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1905be-e755-47a1-9f5a-a1168964b3b2-clustermesh-secrets\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610234 kubelet[2466]: I0707 05:45:55.609845 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a04227-3eac-4e79-9660-038512ad9d05-xtables-lock\") pod \"kube-proxy-5l2g9\" (UID: \"31a04227-3eac-4e79-9660-038512ad9d05\") " pod="kube-system/kube-proxy-5l2g9" Jul 7 05:45:55.610234 kubelet[2466]: I0707 05:45:55.609867 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-cgroup\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610234 kubelet[2466]: I0707 05:45:55.609884 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-etc-cni-netd\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610234 kubelet[2466]: I0707 05:45:55.609900 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-kernel\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609916 2466 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31a04227-3eac-4e79-9660-038512ad9d05-kube-proxy\") pod \"kube-proxy-5l2g9\" (UID: \"31a04227-3eac-4e79-9660-038512ad9d05\") " pod="kube-system/kube-proxy-5l2g9" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609930 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a04227-3eac-4e79-9660-038512ad9d05-lib-modules\") pod \"kube-proxy-5l2g9\" (UID: \"31a04227-3eac-4e79-9660-038512ad9d05\") " pod="kube-system/kube-proxy-5l2g9" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609943 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-hostproc\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609959 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-net\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609975 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-bpf-maps\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610498 kubelet[2466]: I0707 05:45:55.609993 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-lib-modules\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610620 kubelet[2466]: I0707 05:45:55.610006 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-hubble-tls\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610620 kubelet[2466]: I0707 05:45:55.610022 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-run\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610620 kubelet[2466]: I0707 05:45:55.610038 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-xtables-lock\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610620 kubelet[2466]: I0707 05:45:55.610052 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf8rw\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-kube-api-access-rf8rw\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610620 kubelet[2466]: I0707 05:45:55.610083 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scbgl\" (UniqueName: \"kubernetes.io/projected/31a04227-3eac-4e79-9660-038512ad9d05-kube-api-access-scbgl\") pod \"kube-proxy-5l2g9\" (UID: 
\"31a04227-3eac-4e79-9660-038512ad9d05\") " pod="kube-system/kube-proxy-5l2g9" Jul 7 05:45:55.610767 kubelet[2466]: I0707 05:45:55.610098 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-config-path\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.610767 kubelet[2466]: I0707 05:45:55.610111 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cni-path\") pod \"cilium-wm4n4\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " pod="kube-system/cilium-wm4n4" Jul 7 05:45:55.775436 systemd[1]: Created slice kubepods-besteffort-pod9b0ad1a5_0fe9_40ff_bc11_56cbbe3ff419.slice - libcontainer container kubepods-besteffort-pod9b0ad1a5_0fe9_40ff_bc11_56cbbe3ff419.slice. 
Jul 7 05:45:55.811424 kubelet[2466]: I0707 05:45:55.811302 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhk5\" (UniqueName: \"kubernetes.io/projected/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-kube-api-access-bnhk5\") pod \"cilium-operator-5d85765b45-lzphd\" (UID: \"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\") " pod="kube-system/cilium-operator-5d85765b45-lzphd" Jul 7 05:45:55.811424 kubelet[2466]: I0707 05:45:55.811369 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-cilium-config-path\") pod \"cilium-operator-5d85765b45-lzphd\" (UID: \"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\") " pod="kube-system/cilium-operator-5d85765b45-lzphd" Jul 7 05:45:55.864517 kubelet[2466]: E0707 05:45:55.864488 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:55.865292 containerd[1443]: time="2025-07-07T05:45:55.865127886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l2g9,Uid:31a04227-3eac-4e79-9660-038512ad9d05,Namespace:kube-system,Attempt:0,}" Jul 7 05:45:55.870136 kubelet[2466]: E0707 05:45:55.870108 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:55.871170 containerd[1443]: time="2025-07-07T05:45:55.871006297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wm4n4,Uid:3e1905be-e755-47a1-9f5a-a1168964b3b2,Namespace:kube-system,Attempt:0,}" Jul 7 05:45:55.891587 containerd[1443]: time="2025-07-07T05:45:55.891354552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:55.891587 containerd[1443]: time="2025-07-07T05:45:55.891406758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:55.891587 containerd[1443]: time="2025-07-07T05:45:55.891417319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:55.891587 containerd[1443]: time="2025-07-07T05:45:55.891481886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:55.891967 containerd[1443]: time="2025-07-07T05:45:55.891671947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:55.891967 containerd[1443]: time="2025-07-07T05:45:55.891811642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:55.892041 containerd[1443]: time="2025-07-07T05:45:55.891852167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:55.892041 containerd[1443]: time="2025-07-07T05:45:55.891954178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:55.914893 systemd[1]: Started cri-containerd-19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2.scope - libcontainer container 19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2. Jul 7 05:45:55.916547 systemd[1]: Started cri-containerd-a16dba8cf431fbec7a4bd3a698c44cb1295d4932dd7ed0db1687aa395ed1c54e.scope - libcontainer container a16dba8cf431fbec7a4bd3a698c44cb1295d4932dd7ed0db1687aa395ed1c54e. 
Jul 7 05:45:55.937949 containerd[1443]: time="2025-07-07T05:45:55.937884067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wm4n4,Uid:3e1905be-e755-47a1-9f5a-a1168964b3b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\"" Jul 7 05:45:55.938766 kubelet[2466]: E0707 05:45:55.938726 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:55.940455 containerd[1443]: time="2025-07-07T05:45:55.940407146Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 7 05:45:55.942652 containerd[1443]: time="2025-07-07T05:45:55.942566345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5l2g9,Uid:31a04227-3eac-4e79-9660-038512ad9d05,Namespace:kube-system,Attempt:0,} returns sandbox id \"a16dba8cf431fbec7a4bd3a698c44cb1295d4932dd7ed0db1687aa395ed1c54e\"" Jul 7 05:45:55.944288 kubelet[2466]: E0707 05:45:55.944202 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:55.947498 containerd[1443]: time="2025-07-07T05:45:55.947473329Z" level=info msg="CreateContainer within sandbox \"a16dba8cf431fbec7a4bd3a698c44cb1295d4932dd7ed0db1687aa395ed1c54e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 05:45:55.970882 containerd[1443]: time="2025-07-07T05:45:55.970766629Z" level=info msg="CreateContainer within sandbox \"a16dba8cf431fbec7a4bd3a698c44cb1295d4932dd7ed0db1687aa395ed1c54e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1e95309df9ec1532df262e12a1cb23b05864bf27e80311fee479e5acad614d70\"" Jul 7 05:45:55.971364 containerd[1443]: time="2025-07-07T05:45:55.971337733Z" level=info 
msg="StartContainer for \"1e95309df9ec1532df262e12a1cb23b05864bf27e80311fee479e5acad614d70\"" Jul 7 05:45:56.000862 systemd[1]: Started cri-containerd-1e95309df9ec1532df262e12a1cb23b05864bf27e80311fee479e5acad614d70.scope - libcontainer container 1e95309df9ec1532df262e12a1cb23b05864bf27e80311fee479e5acad614d70. Jul 7 05:45:56.025090 containerd[1443]: time="2025-07-07T05:45:56.025048539Z" level=info msg="StartContainer for \"1e95309df9ec1532df262e12a1cb23b05864bf27e80311fee479e5acad614d70\" returns successfully" Jul 7 05:45:56.030497 kubelet[2466]: E0707 05:45:56.030295 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:56.040127 kubelet[2466]: I0707 05:45:56.039965 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5l2g9" podStartSLOduration=1.039936381 podStartE2EDuration="1.039936381s" podCreationTimestamp="2025-07-07 05:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:45:56.03945009 +0000 UTC m=+6.135039451" watchObservedRunningTime="2025-07-07 05:45:56.039936381 +0000 UTC m=+6.135525702" Jul 7 05:45:56.078859 kubelet[2466]: E0707 05:45:56.078743 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:45:56.080675 containerd[1443]: time="2025-07-07T05:45:56.080309655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lzphd,Uid:9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419,Namespace:kube-system,Attempt:0,}" Jul 7 05:45:56.110710 containerd[1443]: time="2025-07-07T05:45:56.110406012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:45:56.110710 containerd[1443]: time="2025-07-07T05:45:56.110472259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:45:56.110710 containerd[1443]: time="2025-07-07T05:45:56.110483580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:56.110710 containerd[1443]: time="2025-07-07T05:45:56.110594592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:45:56.129888 systemd[1]: Started cri-containerd-159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291.scope - libcontainer container 159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291. Jul 7 05:45:56.160066 containerd[1443]: time="2025-07-07T05:45:56.160028296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lzphd,Uid:9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419,Namespace:kube-system,Attempt:0,} returns sandbox id \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\"" Jul 7 05:45:56.160856 kubelet[2466]: E0707 05:45:56.160631 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:01.706101 kubelet[2466]: E0707 05:46:01.706052 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:02.212695 kubelet[2466]: E0707 05:46:02.212510 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:03.102614 update_engine[1423]: I20250707 
05:46:03.102310 1423 update_attempter.cc:509] Updating boot flags... Jul 7 05:46:03.150743 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2846) Jul 7 05:46:03.188831 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2844) Jul 7 05:46:03.218752 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2844) Jul 7 05:46:03.421135 kubelet[2466]: E0707 05:46:03.420541 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:08.925565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4131862144.mount: Deactivated successfully. Jul 7 05:46:10.243362 containerd[1443]: time="2025-07-07T05:46:10.243287029Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:46:10.243776 containerd[1443]: time="2025-07-07T05:46:10.243724172Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 7 05:46:10.244760 containerd[1443]: time="2025-07-07T05:46:10.244682542Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:46:10.246633 containerd[1443]: time="2025-07-07T05:46:10.246511678Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.306054646s" Jul 7 05:46:10.246633 containerd[1443]: time="2025-07-07T05:46:10.246546919Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 7 05:46:10.249039 containerd[1443]: time="2025-07-07T05:46:10.249006688Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 7 05:46:10.253640 containerd[1443]: time="2025-07-07T05:46:10.253604328Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 7 05:46:10.271844 containerd[1443]: time="2025-07-07T05:46:10.271795357Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\"" Jul 7 05:46:10.272347 containerd[1443]: time="2025-07-07T05:46:10.272310784Z" level=info msg="StartContainer for \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\"" Jul 7 05:46:10.296849 systemd[1]: Started cri-containerd-c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225.scope - libcontainer container c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225. 
Jul 7 05:46:10.322173 containerd[1443]: time="2025-07-07T05:46:10.322113584Z" level=info msg="StartContainer for \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\" returns successfully" Jul 7 05:46:10.377548 systemd[1]: cri-containerd-c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225.scope: Deactivated successfully. Jul 7 05:46:10.484622 containerd[1443]: time="2025-07-07T05:46:10.484548142Z" level=info msg="shim disconnected" id=c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225 namespace=k8s.io Jul 7 05:46:10.484622 containerd[1443]: time="2025-07-07T05:46:10.484606625Z" level=warning msg="cleaning up after shim disconnected" id=c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225 namespace=k8s.io Jul 7 05:46:10.484622 containerd[1443]: time="2025-07-07T05:46:10.484615705Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:46:11.058233 kubelet[2466]: E0707 05:46:11.058047 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:11.060542 containerd[1443]: time="2025-07-07T05:46:11.060485109Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 7 05:46:11.076662 containerd[1443]: time="2025-07-07T05:46:11.076613234Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\"" Jul 7 05:46:11.077269 containerd[1443]: time="2025-07-07T05:46:11.077240426Z" level=info msg="StartContainer for \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\"" Jul 7 05:46:11.104890 systemd[1]: Started 
cri-containerd-874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3.scope - libcontainer container 874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3. Jul 7 05:46:11.165043 containerd[1443]: time="2025-07-07T05:46:11.164976768Z" level=info msg="StartContainer for \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\" returns successfully" Jul 7 05:46:11.165114 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 05:46:11.166956 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:46:11.167036 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:46:11.176081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 05:46:11.176301 systemd[1]: cri-containerd-874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3.scope: Deactivated successfully. Jul 7 05:46:11.194403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 05:46:11.238907 containerd[1443]: time="2025-07-07T05:46:11.238818097Z" level=info msg="shim disconnected" id=874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3 namespace=k8s.io Jul 7 05:46:11.238907 containerd[1443]: time="2025-07-07T05:46:11.238882900Z" level=warning msg="cleaning up after shim disconnected" id=874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3 namespace=k8s.io Jul 7 05:46:11.238907 containerd[1443]: time="2025-07-07T05:46:11.238893941Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:46:11.248929 containerd[1443]: time="2025-07-07T05:46:11.248818916Z" level=warning msg="cleanup warnings time=\"2025-07-07T05:46:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 7 05:46:11.261232 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225-rootfs.mount: Deactivated successfully. Jul 7 05:46:11.544563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746453970.mount: Deactivated successfully. Jul 7 05:46:11.833080 containerd[1443]: time="2025-07-07T05:46:11.832619598Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:46:11.833080 containerd[1443]: time="2025-07-07T05:46:11.832965256Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 7 05:46:11.833847 containerd[1443]: time="2025-07-07T05:46:11.833790857Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 05:46:11.835359 containerd[1443]: time="2025-07-07T05:46:11.835243769Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.58619832s" Jul 7 05:46:11.835359 containerd[1443]: time="2025-07-07T05:46:11.835278491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 7 05:46:11.837231 containerd[1443]: time="2025-07-07T05:46:11.837085901Z" level=info msg="CreateContainer 
within sandbox \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 7 05:46:11.846890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795507075.mount: Deactivated successfully. Jul 7 05:46:11.858748 containerd[1443]: time="2025-07-07T05:46:11.858599736Z" level=info msg="CreateContainer within sandbox \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\"" Jul 7 05:46:11.861338 containerd[1443]: time="2025-07-07T05:46:11.860459589Z" level=info msg="StartContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\"" Jul 7 05:46:11.883879 systemd[1]: Started cri-containerd-8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb.scope - libcontainer container 8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb. 
Jul 7 05:46:11.904969 containerd[1443]: time="2025-07-07T05:46:11.904783523Z" level=info msg="StartContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" returns successfully" Jul 7 05:46:12.069462 kubelet[2466]: E0707 05:46:12.066719 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:12.073756 containerd[1443]: time="2025-07-07T05:46:12.073640287Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 7 05:46:12.076869 kubelet[2466]: E0707 05:46:12.074781 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:12.096592 containerd[1443]: time="2025-07-07T05:46:12.096449978Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\"" Jul 7 05:46:12.097391 containerd[1443]: time="2025-07-07T05:46:12.097356741Z" level=info msg="StartContainer for \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\"" Jul 7 05:46:12.140905 systemd[1]: Started cri-containerd-e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71.scope - libcontainer container e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71. 
Jul 7 05:46:12.192048 containerd[1443]: time="2025-07-07T05:46:12.191985389Z" level=info msg="StartContainer for \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\" returns successfully" Jul 7 05:46:12.217409 systemd[1]: cri-containerd-e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71.scope: Deactivated successfully. Jul 7 05:46:12.264647 containerd[1443]: time="2025-07-07T05:46:12.264491739Z" level=info msg="shim disconnected" id=e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71 namespace=k8s.io Jul 7 05:46:12.264647 containerd[1443]: time="2025-07-07T05:46:12.264638186Z" level=warning msg="cleaning up after shim disconnected" id=e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71 namespace=k8s.io Jul 7 05:46:12.264647 containerd[1443]: time="2025-07-07T05:46:12.264648066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:46:13.078980 kubelet[2466]: E0707 05:46:13.078937 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:13.078980 kubelet[2466]: E0707 05:46:13.079025 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:13.088972 containerd[1443]: time="2025-07-07T05:46:13.082970466Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 7 05:46:13.097033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421060426.mount: Deactivated successfully. 
Jul 7 05:46:13.099654 kubelet[2466]: I0707 05:46:13.099603 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lzphd" podStartSLOduration=2.4246521850000002 podStartE2EDuration="18.099585309s" podCreationTimestamp="2025-07-07 05:45:55 +0000 UTC" firstStartedPulling="2025-07-07 05:45:56.161035402 +0000 UTC m=+6.256624763" lastFinishedPulling="2025-07-07 05:46:11.835968526 +0000 UTC m=+21.931557887" observedRunningTime="2025-07-07 05:46:12.119879699 +0000 UTC m=+22.215469060" watchObservedRunningTime="2025-07-07 05:46:13.099585309 +0000 UTC m=+23.195174670" Jul 7 05:46:13.101289 containerd[1443]: time="2025-07-07T05:46:13.101240744Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\"" Jul 7 05:46:13.101870 containerd[1443]: time="2025-07-07T05:46:13.101713526Z" level=info msg="StartContainer for \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\"" Jul 7 05:46:13.127869 systemd[1]: Started cri-containerd-b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec.scope - libcontainer container b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec. Jul 7 05:46:13.149855 systemd[1]: cri-containerd-b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec.scope: Deactivated successfully. 
Jul 7 05:46:13.151530 containerd[1443]: time="2025-07-07T05:46:13.151485690Z" level=info msg="StartContainer for \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\" returns successfully" Jul 7 05:46:13.177970 containerd[1443]: time="2025-07-07T05:46:13.177886221Z" level=info msg="shim disconnected" id=b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec namespace=k8s.io Jul 7 05:46:13.177970 containerd[1443]: time="2025-07-07T05:46:13.177954784Z" level=warning msg="cleaning up after shim disconnected" id=b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec namespace=k8s.io Jul 7 05:46:13.177970 containerd[1443]: time="2025-07-07T05:46:13.177963504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:46:13.261337 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec-rootfs.mount: Deactivated successfully. Jul 7 05:46:14.082898 kubelet[2466]: E0707 05:46:14.082818 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:14.087138 containerd[1443]: time="2025-07-07T05:46:14.087082335Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 7 05:46:14.110652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2279223388.mount: Deactivated successfully. 
Jul 7 05:46:14.112016 containerd[1443]: time="2025-07-07T05:46:14.111954710Z" level=info msg="CreateContainer within sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\"" Jul 7 05:46:14.112850 containerd[1443]: time="2025-07-07T05:46:14.112517495Z" level=info msg="StartContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\"" Jul 7 05:46:14.144926 systemd[1]: Started cri-containerd-8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0.scope - libcontainer container 8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0. Jul 7 05:46:14.170459 containerd[1443]: time="2025-07-07T05:46:14.170418885Z" level=info msg="StartContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" returns successfully" Jul 7 05:46:14.358675 kubelet[2466]: I0707 05:46:14.358551 2466 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 05:46:14.398004 systemd[1]: Created slice kubepods-burstable-pod43949d8a_719f_4cb8_a467_afe96265951b.slice - libcontainer container kubepods-burstable-pod43949d8a_719f_4cb8_a467_afe96265951b.slice. Jul 7 05:46:14.404808 systemd[1]: Created slice kubepods-burstable-pod86fe7218_c1d1_4843_9441_4b5ff29c29fa.slice - libcontainer container kubepods-burstable-pod86fe7218_c1d1_4843_9441_4b5ff29c29fa.slice. 
Jul 7 05:46:14.445656 kubelet[2466]: I0707 05:46:14.445599 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86fe7218-c1d1-4843-9441-4b5ff29c29fa-config-volume\") pod \"coredns-7c65d6cfc9-jdfw4\" (UID: \"86fe7218-c1d1-4843-9441-4b5ff29c29fa\") " pod="kube-system/coredns-7c65d6cfc9-jdfw4" Jul 7 05:46:14.445656 kubelet[2466]: I0707 05:46:14.445644 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5g5\" (UniqueName: \"kubernetes.io/projected/86fe7218-c1d1-4843-9441-4b5ff29c29fa-kube-api-access-9h5g5\") pod \"coredns-7c65d6cfc9-jdfw4\" (UID: \"86fe7218-c1d1-4843-9441-4b5ff29c29fa\") " pod="kube-system/coredns-7c65d6cfc9-jdfw4" Jul 7 05:46:14.445914 kubelet[2466]: I0707 05:46:14.445683 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43949d8a-719f-4cb8-a467-afe96265951b-config-volume\") pod \"coredns-7c65d6cfc9-n7d2w\" (UID: \"43949d8a-719f-4cb8-a467-afe96265951b\") " pod="kube-system/coredns-7c65d6cfc9-n7d2w" Jul 7 05:46:14.445914 kubelet[2466]: I0707 05:46:14.445721 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbln\" (UniqueName: \"kubernetes.io/projected/43949d8a-719f-4cb8-a467-afe96265951b-kube-api-access-dlbln\") pod \"coredns-7c65d6cfc9-n7d2w\" (UID: \"43949d8a-719f-4cb8-a467-afe96265951b\") " pod="kube-system/coredns-7c65d6cfc9-n7d2w" Jul 7 05:46:14.703695 kubelet[2466]: E0707 05:46:14.703374 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:14.704973 containerd[1443]: time="2025-07-07T05:46:14.704580764Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-n7d2w,Uid:43949d8a-719f-4cb8-a467-afe96265951b,Namespace:kube-system,Attempt:0,}" Jul 7 05:46:14.706794 kubelet[2466]: E0707 05:46:14.706623 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:14.707140 containerd[1443]: time="2025-07-07T05:46:14.707014831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdfw4,Uid:86fe7218-c1d1-4843-9441-4b5ff29c29fa,Namespace:kube-system,Attempt:0,}" Jul 7 05:46:15.088516 kubelet[2466]: E0707 05:46:15.088483 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:15.102603 kubelet[2466]: I0707 05:46:15.102545 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wm4n4" podStartSLOduration=5.793689651 podStartE2EDuration="20.102519709s" podCreationTimestamp="2025-07-07 05:45:55 +0000 UTC" firstStartedPulling="2025-07-07 05:45:55.93999282 +0000 UTC m=+6.035582141" lastFinishedPulling="2025-07-07 05:46:10.248822838 +0000 UTC m=+20.344412199" observedRunningTime="2025-07-07 05:46:15.101101609 +0000 UTC m=+25.196690970" watchObservedRunningTime="2025-07-07 05:46:15.102519709 +0000 UTC m=+25.198109030" Jul 7 05:46:16.089506 kubelet[2466]: E0707 05:46:16.089143 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:16.453132 systemd-networkd[1372]: cilium_host: Link UP Jul 7 05:46:16.453980 systemd-networkd[1372]: cilium_net: Link UP Jul 7 05:46:16.454453 systemd-networkd[1372]: cilium_net: Gained carrier Jul 7 05:46:16.454685 systemd-networkd[1372]: cilium_host: Gained carrier Jul 7 05:46:16.454840 systemd-networkd[1372]: 
cilium_net: Gained IPv6LL Jul 7 05:46:16.454972 systemd-networkd[1372]: cilium_host: Gained IPv6LL Jul 7 05:46:16.538448 systemd-networkd[1372]: cilium_vxlan: Link UP Jul 7 05:46:16.538454 systemd-networkd[1372]: cilium_vxlan: Gained carrier Jul 7 05:46:16.834738 kernel: NET: Registered PF_ALG protocol family Jul 7 05:46:17.091291 kubelet[2466]: E0707 05:46:17.090689 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:17.401003 systemd-networkd[1372]: lxc_health: Link UP Jul 7 05:46:17.414095 systemd-networkd[1372]: lxc_health: Gained carrier Jul 7 05:46:17.842306 systemd-networkd[1372]: lxc0d70fe9da03a: Link UP Jul 7 05:46:17.842842 systemd-networkd[1372]: lxcf176efa2a770: Link UP Jul 7 05:46:17.857738 kernel: eth0: renamed from tmp5e5d9 Jul 7 05:46:17.864740 kernel: eth0: renamed from tmp7a27c Jul 7 05:46:17.869389 systemd-networkd[1372]: lxc0d70fe9da03a: Gained carrier Jul 7 05:46:17.871360 systemd-networkd[1372]: lxcf176efa2a770: Gained carrier Jul 7 05:46:18.037185 systemd-networkd[1372]: cilium_vxlan: Gained IPv6LL Jul 7 05:46:18.092242 kubelet[2466]: E0707 05:46:18.091875 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:19.101292 kubelet[2466]: E0707 05:46:19.101210 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:19.189296 systemd-networkd[1372]: lxc0d70fe9da03a: Gained IPv6LL Jul 7 05:46:19.237082 systemd[1]: Started sshd@7-10.0.0.6:22-10.0.0.1:44048.service - OpenSSH per-connection server daemon (10.0.0.1:44048). 
Jul 7 05:46:19.252879 systemd-networkd[1372]: lxc_health: Gained IPv6LL Jul 7 05:46:19.275744 sshd[3696]: Accepted publickey for core from 10.0.0.1 port 44048 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:19.276604 sshd[3696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:19.281753 systemd-logind[1418]: New session 8 of user core. Jul 7 05:46:19.294753 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 05:46:19.318108 systemd-networkd[1372]: lxcf176efa2a770: Gained IPv6LL Jul 7 05:46:19.424424 sshd[3696]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:19.427891 systemd[1]: sshd@7-10.0.0.6:22-10.0.0.1:44048.service: Deactivated successfully. Jul 7 05:46:19.429412 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 05:46:19.430460 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit. Jul 7 05:46:19.431413 systemd-logind[1418]: Removed session 8. Jul 7 05:46:20.096148 kubelet[2466]: E0707 05:46:20.096118 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:21.364114 containerd[1443]: time="2025-07-07T05:46:21.364010502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:46:21.364114 containerd[1443]: time="2025-07-07T05:46:21.364076305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:46:21.364536 containerd[1443]: time="2025-07-07T05:46:21.364094505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:46:21.364536 containerd[1443]: time="2025-07-07T05:46:21.364206029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:46:21.372796 containerd[1443]: time="2025-07-07T05:46:21.372679637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 05:46:21.372796 containerd[1443]: time="2025-07-07T05:46:21.372753679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 05:46:21.372796 containerd[1443]: time="2025-07-07T05:46:21.372768440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:46:21.373009 containerd[1443]: time="2025-07-07T05:46:21.372849402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 05:46:21.397898 systemd[1]: Started cri-containerd-5e5d9605e28dfa4df7a0065b47576524a5ad7359bbf7ad6176bfd1873c1456b8.scope - libcontainer container 5e5d9605e28dfa4df7a0065b47576524a5ad7359bbf7ad6176bfd1873c1456b8. Jul 7 05:46:21.399373 systemd[1]: Started cri-containerd-7a27cfad2dc5eff2026dfa246c3a77535481fa991d0f1586f0c8b23e7c6df7a7.scope - libcontainer container 7a27cfad2dc5eff2026dfa246c3a77535481fa991d0f1586f0c8b23e7c6df7a7. 
Jul 7 05:46:21.410990 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 05:46:21.412676 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 05:46:21.431929 containerd[1443]: time="2025-07-07T05:46:21.431885407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-n7d2w,Uid:43949d8a-719f-4cb8-a467-afe96265951b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a27cfad2dc5eff2026dfa246c3a77535481fa991d0f1586f0c8b23e7c6df7a7\"" Jul 7 05:46:21.432996 kubelet[2466]: E0707 05:46:21.432937 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:21.437683 containerd[1443]: time="2025-07-07T05:46:21.437642362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-jdfw4,Uid:86fe7218-c1d1-4843-9441-4b5ff29c29fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e5d9605e28dfa4df7a0065b47576524a5ad7359bbf7ad6176bfd1873c1456b8\"" Jul 7 05:46:21.438414 containerd[1443]: time="2025-07-07T05:46:21.438353747Z" level=info msg="CreateContainer within sandbox \"7a27cfad2dc5eff2026dfa246c3a77535481fa991d0f1586f0c8b23e7c6df7a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:46:21.440433 kubelet[2466]: E0707 05:46:21.439905 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:21.443266 containerd[1443]: time="2025-07-07T05:46:21.443119628Z" level=info msg="CreateContainer within sandbox \"5e5d9605e28dfa4df7a0065b47576524a5ad7359bbf7ad6176bfd1873c1456b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 05:46:21.454406 containerd[1443]: time="2025-07-07T05:46:21.454357290Z" 
level=info msg="CreateContainer within sandbox \"7a27cfad2dc5eff2026dfa246c3a77535481fa991d0f1586f0c8b23e7c6df7a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"320a54b2de93f60b0247cb09e6ff66d7d5d8f9295015c43bae228bc1abffbc18\"" Jul 7 05:46:21.455956 containerd[1443]: time="2025-07-07T05:46:21.455047873Z" level=info msg="StartContainer for \"320a54b2de93f60b0247cb09e6ff66d7d5d8f9295015c43bae228bc1abffbc18\"" Jul 7 05:46:21.461816 containerd[1443]: time="2025-07-07T05:46:21.461773382Z" level=info msg="CreateContainer within sandbox \"5e5d9605e28dfa4df7a0065b47576524a5ad7359bbf7ad6176bfd1873c1456b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8651d3acb4cd3ed6611a9e8132681ac8390ceb6a14bf2ba529612629bb934c5d\"" Jul 7 05:46:21.462970 containerd[1443]: time="2025-07-07T05:46:21.462933501Z" level=info msg="StartContainer for \"8651d3acb4cd3ed6611a9e8132681ac8390ceb6a14bf2ba529612629bb934c5d\"" Jul 7 05:46:21.479489 systemd[1]: Started cri-containerd-320a54b2de93f60b0247cb09e6ff66d7d5d8f9295015c43bae228bc1abffbc18.scope - libcontainer container 320a54b2de93f60b0247cb09e6ff66d7d5d8f9295015c43bae228bc1abffbc18. Jul 7 05:46:21.489910 systemd[1]: Started cri-containerd-8651d3acb4cd3ed6611a9e8132681ac8390ceb6a14bf2ba529612629bb934c5d.scope - libcontainer container 8651d3acb4cd3ed6611a9e8132681ac8390ceb6a14bf2ba529612629bb934c5d. 
Jul 7 05:46:21.512368 containerd[1443]: time="2025-07-07T05:46:21.509036906Z" level=info msg="StartContainer for \"320a54b2de93f60b0247cb09e6ff66d7d5d8f9295015c43bae228bc1abffbc18\" returns successfully" Jul 7 05:46:21.519034 containerd[1443]: time="2025-07-07T05:46:21.518997285Z" level=info msg="StartContainer for \"8651d3acb4cd3ed6611a9e8132681ac8390ceb6a14bf2ba529612629bb934c5d\" returns successfully" Jul 7 05:46:22.101677 kubelet[2466]: E0707 05:46:22.100332 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:22.103441 kubelet[2466]: E0707 05:46:22.103389 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:22.111745 kubelet[2466]: I0707 05:46:22.111681 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-jdfw4" podStartSLOduration=27.111669606 podStartE2EDuration="27.111669606s" podCreationTimestamp="2025-07-07 05:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:46:22.111042945 +0000 UTC m=+32.206632386" watchObservedRunningTime="2025-07-07 05:46:22.111669606 +0000 UTC m=+32.207258927" Jul 7 05:46:22.123274 kubelet[2466]: I0707 05:46:22.123216 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-n7d2w" podStartSLOduration=27.123198584 podStartE2EDuration="27.123198584s" podCreationTimestamp="2025-07-07 05:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:46:22.122301715 +0000 UTC m=+32.217891076" watchObservedRunningTime="2025-07-07 05:46:22.123198584 +0000 UTC 
m=+32.218787945" Jul 7 05:46:23.105185 kubelet[2466]: E0707 05:46:23.105149 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:23.106464 kubelet[2466]: E0707 05:46:23.105245 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:24.107861 kubelet[2466]: E0707 05:46:24.107484 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:24.107861 kubelet[2466]: E0707 05:46:24.107792 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 05:46:24.440099 systemd[1]: Started sshd@8-10.0.0.6:22-10.0.0.1:41234.service - OpenSSH per-connection server daemon (10.0.0.1:41234). Jul 7 05:46:24.479005 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 41234 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:24.480460 sshd[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:24.484452 systemd-logind[1418]: New session 9 of user core. Jul 7 05:46:24.494984 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 05:46:24.614661 sshd[3889]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:24.618645 systemd[1]: sshd@8-10.0.0.6:22-10.0.0.1:41234.service: Deactivated successfully. Jul 7 05:46:24.621597 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 05:46:24.623102 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit. Jul 7 05:46:24.623961 systemd-logind[1418]: Removed session 9. 
Jul 7 05:46:29.636048 systemd[1]: Started sshd@9-10.0.0.6:22-10.0.0.1:41250.service - OpenSSH per-connection server daemon (10.0.0.1:41250). Jul 7 05:46:29.666343 sshd[3906]: Accepted publickey for core from 10.0.0.1 port 41250 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:29.667624 sshd[3906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:29.671052 systemd-logind[1418]: New session 10 of user core. Jul 7 05:46:29.682880 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 05:46:29.804077 sshd[3906]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:29.807856 systemd[1]: sshd@9-10.0.0.6:22-10.0.0.1:41250.service: Deactivated successfully. Jul 7 05:46:29.810453 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 05:46:29.811105 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Jul 7 05:46:29.812207 systemd-logind[1418]: Removed session 10. Jul 7 05:46:34.819066 systemd[1]: Started sshd@10-10.0.0.6:22-10.0.0.1:37060.service - OpenSSH per-connection server daemon (10.0.0.1:37060). Jul 7 05:46:34.852979 sshd[3923]: Accepted publickey for core from 10.0.0.1 port 37060 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:34.854213 sshd[3923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:34.858079 systemd-logind[1418]: New session 11 of user core. Jul 7 05:46:34.868852 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 05:46:34.975238 sshd[3923]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:34.985246 systemd[1]: sshd@10-10.0.0.6:22-10.0.0.1:37060.service: Deactivated successfully. Jul 7 05:46:34.986681 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 05:46:34.988581 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. 
Jul 7 05:46:34.997081 systemd[1]: Started sshd@11-10.0.0.6:22-10.0.0.1:37074.service - OpenSSH per-connection server daemon (10.0.0.1:37074). Jul 7 05:46:34.997987 systemd-logind[1418]: Removed session 11. Jul 7 05:46:35.026100 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 37074 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:35.027411 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:35.031160 systemd-logind[1418]: New session 12 of user core. Jul 7 05:46:35.040868 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 05:46:35.189973 sshd[3938]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:35.203996 systemd[1]: sshd@11-10.0.0.6:22-10.0.0.1:37074.service: Deactivated successfully. Jul 7 05:46:35.207865 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 05:46:35.213461 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Jul 7 05:46:35.223185 systemd[1]: Started sshd@12-10.0.0.6:22-10.0.0.1:37082.service - OpenSSH per-connection server daemon (10.0.0.1:37082). Jul 7 05:46:35.225002 systemd-logind[1418]: Removed session 12. Jul 7 05:46:35.255255 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 37082 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:35.256660 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:35.261358 systemd-logind[1418]: New session 13 of user core. Jul 7 05:46:35.268872 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 05:46:35.377764 sshd[3951]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:35.381624 systemd[1]: sshd@12-10.0.0.6:22-10.0.0.1:37082.service: Deactivated successfully. Jul 7 05:46:35.383336 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 05:46:35.383899 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. 
Jul 7 05:46:35.384941 systemd-logind[1418]: Removed session 13. Jul 7 05:46:40.388313 systemd[1]: Started sshd@13-10.0.0.6:22-10.0.0.1:37084.service - OpenSSH per-connection server daemon (10.0.0.1:37084). Jul 7 05:46:40.424649 sshd[3966]: Accepted publickey for core from 10.0.0.1 port 37084 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:40.425887 sshd[3966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:40.429669 systemd-logind[1418]: New session 14 of user core. Jul 7 05:46:40.441001 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 05:46:40.553782 sshd[3966]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:40.557483 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Jul 7 05:46:40.557820 systemd[1]: sshd@13-10.0.0.6:22-10.0.0.1:37084.service: Deactivated successfully. Jul 7 05:46:40.560288 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 05:46:40.561255 systemd-logind[1418]: Removed session 14. Jul 7 05:46:45.567107 systemd[1]: Started sshd@14-10.0.0.6:22-10.0.0.1:39042.service - OpenSSH per-connection server daemon (10.0.0.1:39042). Jul 7 05:46:45.633825 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 39042 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:45.633591 sshd[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:45.640379 systemd-logind[1418]: New session 15 of user core. Jul 7 05:46:45.651884 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 05:46:45.772264 sshd[3980]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:45.789785 systemd[1]: sshd@14-10.0.0.6:22-10.0.0.1:39042.service: Deactivated successfully. Jul 7 05:46:45.793051 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 05:46:45.795585 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. 
Jul 7 05:46:45.796766 systemd[1]: Started sshd@15-10.0.0.6:22-10.0.0.1:39044.service - OpenSSH per-connection server daemon (10.0.0.1:39044). Jul 7 05:46:45.797554 systemd-logind[1418]: Removed session 15. Jul 7 05:46:45.830693 sshd[3995]: Accepted publickey for core from 10.0.0.1 port 39044 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:45.832133 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:45.837232 systemd-logind[1418]: New session 16 of user core. Jul 7 05:46:45.845183 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 05:46:46.107406 sshd[3995]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:46.119094 systemd[1]: sshd@15-10.0.0.6:22-10.0.0.1:39044.service: Deactivated successfully. Jul 7 05:46:46.120500 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 05:46:46.121848 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Jul 7 05:46:46.136743 systemd[1]: Started sshd@16-10.0.0.6:22-10.0.0.1:39048.service - OpenSSH per-connection server daemon (10.0.0.1:39048). Jul 7 05:46:46.137794 systemd-logind[1418]: Removed session 16. Jul 7 05:46:46.169630 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 39048 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:46.171029 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:46.175659 systemd-logind[1418]: New session 17 of user core. Jul 7 05:46:46.185864 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 05:46:47.549249 sshd[4007]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:47.560770 systemd[1]: sshd@16-10.0.0.6:22-10.0.0.1:39048.service: Deactivated successfully. Jul 7 05:46:47.562999 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 05:46:47.565770 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. 
Jul 7 05:46:47.573929 systemd[1]: Started sshd@17-10.0.0.6:22-10.0.0.1:39060.service - OpenSSH per-connection server daemon (10.0.0.1:39060). Jul 7 05:46:47.577029 systemd-logind[1418]: Removed session 17. Jul 7 05:46:47.608870 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 39060 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:47.610399 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:47.614485 systemd-logind[1418]: New session 18 of user core. Jul 7 05:46:47.623887 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 05:46:47.842383 sshd[4027]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:47.856489 systemd[1]: sshd@17-10.0.0.6:22-10.0.0.1:39060.service: Deactivated successfully. Jul 7 05:46:47.859523 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 05:46:47.861263 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Jul 7 05:46:47.871102 systemd[1]: Started sshd@18-10.0.0.6:22-10.0.0.1:39066.service - OpenSSH per-connection server daemon (10.0.0.1:39066). Jul 7 05:46:47.872090 systemd-logind[1418]: Removed session 18. Jul 7 05:46:47.902748 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 39066 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:47.904522 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:47.908313 systemd-logind[1418]: New session 19 of user core. Jul 7 05:46:47.914881 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 05:46:48.020929 sshd[4040]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:48.024251 systemd[1]: sshd@18-10.0.0.6:22-10.0.0.1:39066.service: Deactivated successfully. Jul 7 05:46:48.026056 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 05:46:48.026674 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. 
Jul 7 05:46:48.027568 systemd-logind[1418]: Removed session 19. Jul 7 05:46:53.031212 systemd[1]: Started sshd@19-10.0.0.6:22-10.0.0.1:39312.service - OpenSSH per-connection server daemon (10.0.0.1:39312). Jul 7 05:46:53.063385 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:53.064621 sshd[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:53.068627 systemd-logind[1418]: New session 20 of user core. Jul 7 05:46:53.082857 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 05:46:53.191941 sshd[4060]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:53.195776 systemd[1]: sshd@19-10.0.0.6:22-10.0.0.1:39312.service: Deactivated successfully. Jul 7 05:46:53.197506 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 05:46:53.198251 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Jul 7 05:46:53.199029 systemd-logind[1418]: Removed session 20. Jul 7 05:46:58.205403 systemd[1]: Started sshd@20-10.0.0.6:22-10.0.0.1:39318.service - OpenSSH per-connection server daemon (10.0.0.1:39318). Jul 7 05:46:58.236796 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 39318 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:46:58.238037 sshd[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:46:58.242072 systemd-logind[1418]: New session 21 of user core. Jul 7 05:46:58.252864 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 05:46:58.359827 sshd[4077]: pam_unix(sshd:session): session closed for user core Jul 7 05:46:58.363909 systemd[1]: sshd@20-10.0.0.6:22-10.0.0.1:39318.service: Deactivated successfully. Jul 7 05:46:58.365523 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 05:46:58.366167 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. 
Jul 7 05:46:58.367172 systemd-logind[1418]: Removed session 21. Jul 7 05:47:03.372730 systemd[1]: Started sshd@21-10.0.0.6:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324). Jul 7 05:47:03.411658 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:47:03.412585 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:47:03.416558 systemd-logind[1418]: New session 22 of user core. Jul 7 05:47:03.432911 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 05:47:03.540902 sshd[4092]: pam_unix(sshd:session): session closed for user core Jul 7 05:47:03.548254 systemd[1]: sshd@21-10.0.0.6:22-10.0.0.1:60324.service: Deactivated successfully. Jul 7 05:47:03.549971 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 05:47:03.551750 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Jul 7 05:47:03.562033 systemd[1]: Started sshd@22-10.0.0.6:22-10.0.0.1:60328.service - OpenSSH per-connection server daemon (10.0.0.1:60328). Jul 7 05:47:03.563679 systemd-logind[1418]: Removed session 22. Jul 7 05:47:03.593047 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 60328 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:47:03.594791 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:47:03.598454 systemd-logind[1418]: New session 23 of user core. Jul 7 05:47:03.604870 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 7 05:47:05.157814 containerd[1443]: time="2025-07-07T05:47:05.157659696Z" level=info msg="StopContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" with timeout 30 (s)" Jul 7 05:47:05.158444 containerd[1443]: time="2025-07-07T05:47:05.158316813Z" level=info msg="Stop container \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" with signal terminated" Jul 7 05:47:05.167140 systemd[1]: cri-containerd-8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb.scope: Deactivated successfully. Jul 7 05:47:05.189488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb-rootfs.mount: Deactivated successfully. Jul 7 05:47:05.200390 containerd[1443]: time="2025-07-07T05:47:05.200323663Z" level=info msg="shim disconnected" id=8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb namespace=k8s.io Jul 7 05:47:05.200390 containerd[1443]: time="2025-07-07T05:47:05.200384423Z" level=warning msg="cleaning up after shim disconnected" id=8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb namespace=k8s.io Jul 7 05:47:05.200390 containerd[1443]: time="2025-07-07T05:47:05.200397423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:47:05.202782 containerd[1443]: time="2025-07-07T05:47:05.202748852Z" level=info msg="StopContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" with timeout 2 (s)" Jul 7 05:47:05.203013 containerd[1443]: time="2025-07-07T05:47:05.202989131Z" level=info msg="Stop container \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" with signal terminated" Jul 7 05:47:05.206255 containerd[1443]: time="2025-07-07T05:47:05.206215116Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not 
initialized: failed to load cni config" Jul 7 05:47:05.207853 systemd-networkd[1372]: lxc_health: Link DOWN Jul 7 05:47:05.207858 systemd-networkd[1372]: lxc_health: Lost carrier Jul 7 05:47:05.240411 systemd[1]: cri-containerd-8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0.scope: Deactivated successfully. Jul 7 05:47:05.240679 systemd[1]: cri-containerd-8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0.scope: Consumed 6.493s CPU time. Jul 7 05:47:05.245693 containerd[1443]: time="2025-07-07T05:47:05.245636418Z" level=info msg="StopContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" returns successfully" Jul 7 05:47:05.246184 containerd[1443]: time="2025-07-07T05:47:05.246159056Z" level=info msg="StopPodSandbox for \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\"" Jul 7 05:47:05.246224 containerd[1443]: time="2025-07-07T05:47:05.246192616Z" level=info msg="Container to stop \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.249940 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291-shm.mount: Deactivated successfully. Jul 7 05:47:05.255435 systemd[1]: cri-containerd-159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291.scope: Deactivated successfully. Jul 7 05:47:05.260121 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0-rootfs.mount: Deactivated successfully. Jul 7 05:47:05.274494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291-rootfs.mount: Deactivated successfully. 
Jul 7 05:47:05.274967 containerd[1443]: time="2025-07-07T05:47:05.274792606Z" level=info msg="shim disconnected" id=8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0 namespace=k8s.io Jul 7 05:47:05.274967 containerd[1443]: time="2025-07-07T05:47:05.274856366Z" level=warning msg="cleaning up after shim disconnected" id=8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0 namespace=k8s.io Jul 7 05:47:05.274967 containerd[1443]: time="2025-07-07T05:47:05.274865166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:47:05.274967 containerd[1443]: time="2025-07-07T05:47:05.274924726Z" level=info msg="shim disconnected" id=159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291 namespace=k8s.io Jul 7 05:47:05.275111 containerd[1443]: time="2025-07-07T05:47:05.274969405Z" level=warning msg="cleaning up after shim disconnected" id=159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291 namespace=k8s.io Jul 7 05:47:05.275111 containerd[1443]: time="2025-07-07T05:47:05.274978405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:47:05.285989 containerd[1443]: time="2025-07-07T05:47:05.285949476Z" level=info msg="TearDown network for sandbox \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\" successfully" Jul 7 05:47:05.285989 containerd[1443]: time="2025-07-07T05:47:05.285983996Z" level=info msg="StopPodSandbox for \"159bff5b09b5dead6a4ad7d9ba62b48dae3a2a298f745b494afa5536add58291\" returns successfully" Jul 7 05:47:05.293621 containerd[1443]: time="2025-07-07T05:47:05.293548401Z" level=info msg="StopContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" returns successfully" Jul 7 05:47:05.294336 containerd[1443]: time="2025-07-07T05:47:05.294305238Z" level=info msg="StopPodSandbox for \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\"" Jul 7 05:47:05.294425 containerd[1443]: time="2025-07-07T05:47:05.294348278Z" level=info 
msg="Container to stop \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.294425 containerd[1443]: time="2025-07-07T05:47:05.294409598Z" level=info msg="Container to stop \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.294473 containerd[1443]: time="2025-07-07T05:47:05.294428117Z" level=info msg="Container to stop \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.294473 containerd[1443]: time="2025-07-07T05:47:05.294437797Z" level=info msg="Container to stop \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.294517 containerd[1443]: time="2025-07-07T05:47:05.294471117Z" level=info msg="Container to stop \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 05:47:05.300167 systemd[1]: cri-containerd-19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2.scope: Deactivated successfully. 
Jul 7 05:47:05.321073 containerd[1443]: time="2025-07-07T05:47:05.321002637Z" level=info msg="shim disconnected" id=19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2 namespace=k8s.io Jul 7 05:47:05.321073 containerd[1443]: time="2025-07-07T05:47:05.321062917Z" level=warning msg="cleaning up after shim disconnected" id=19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2 namespace=k8s.io Jul 7 05:47:05.321073 containerd[1443]: time="2025-07-07T05:47:05.321072797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 05:47:05.335184 containerd[1443]: time="2025-07-07T05:47:05.335129493Z" level=info msg="TearDown network for sandbox \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" successfully" Jul 7 05:47:05.335184 containerd[1443]: time="2025-07-07T05:47:05.335174133Z" level=info msg="StopPodSandbox for \"19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2\" returns successfully" Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463221 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1905be-e755-47a1-9f5a-a1168964b3b2-clustermesh-secrets\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463265 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-cgroup\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463285 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnhk5\" (UniqueName: \"kubernetes.io/projected/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-kube-api-access-bnhk5\") pod \"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\" (UID: 
\"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\") " Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463307 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-cilium-config-path\") pod \"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\" (UID: \"9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419\") " Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463324 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-kernel\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.463643 kubelet[2466]: I0707 05:47:05.463339 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-run\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463352 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cni-path\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463367 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-hubble-tls\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463407 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-xtables-lock\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463422 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-hostproc\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463435 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-net\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464246 kubelet[2466]: I0707 05:47:05.463449 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-bpf-maps\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464377 kubelet[2466]: I0707 05:47:05.463462 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-lib-modules\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464377 kubelet[2466]: I0707 05:47:05.463479 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf8rw\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-kube-api-access-rf8rw\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464377 kubelet[2466]: I0707 05:47:05.463496 2466 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-config-path\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.464377 kubelet[2466]: I0707 05:47:05.463511 2466 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-etc-cni-netd\") pod \"3e1905be-e755-47a1-9f5a-a1168964b3b2\" (UID: \"3e1905be-e755-47a1-9f5a-a1168964b3b2\") " Jul 7 05:47:05.472273 kubelet[2466]: I0707 05:47:05.471927 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472273 kubelet[2466]: I0707 05:47:05.471930 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419" (UID: "9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:47:05.472273 kubelet[2466]: I0707 05:47:05.472020 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472273 kubelet[2466]: I0707 05:47:05.472040 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472273 kubelet[2466]: I0707 05:47:05.472059 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472499 kubelet[2466]: I0707 05:47:05.472074 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472499 kubelet[2466]: I0707 05:47:05.472424 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:47:05.472499 kubelet[2466]: I0707 05:47:05.472466 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.472499 kubelet[2466]: I0707 05:47:05.472481 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.474099 kubelet[2466]: I0707 05:47:05.474062 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-kube-api-access-rf8rw" (OuterVolumeSpecName: "kube-api-access-rf8rw") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "kube-api-access-rf8rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:47:05.474159 kubelet[2466]: I0707 05:47:05.474117 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.474159 kubelet[2466]: I0707 05:47:05.474137 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.474265 kubelet[2466]: I0707 05:47:05.474231 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 7 05:47:05.474782 kubelet[2466]: I0707 05:47:05.474406 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 05:47:05.476611 kubelet[2466]: I0707 05:47:05.476565 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-kube-api-access-bnhk5" (OuterVolumeSpecName: "kube-api-access-bnhk5") pod "9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419" (UID: "9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419"). InnerVolumeSpecName "kube-api-access-bnhk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 05:47:05.477088 kubelet[2466]: I0707 05:47:05.477036 2466 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1905be-e755-47a1-9f5a-a1168964b3b2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e1905be-e755-47a1-9f5a-a1168964b3b2" (UID: "3e1905be-e755-47a1-9f5a-a1168964b3b2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564363 2466 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564397 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564407 2466 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564419 2466 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564427 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564437 2466 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564444 2466 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564479 kubelet[2466]: I0707 05:47:05.564452 2466 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564459 2466 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564475 2466 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf8rw\" (UniqueName: \"kubernetes.io/projected/3e1905be-e755-47a1-9f5a-a1168964b3b2-kube-api-access-rf8rw\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564486 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564499 2466 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564511 2466 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-host-proc-sys-net\") on node \"localhost\" DevicePath 
\"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564523 2466 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e1905be-e755-47a1-9f5a-a1168964b3b2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564537 2466 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e1905be-e755-47a1-9f5a-a1168964b3b2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:05.564874 kubelet[2466]: I0707 05:47:05.564553 2466 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnhk5\" (UniqueName: \"kubernetes.io/projected/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419-kube-api-access-bnhk5\") on node \"localhost\" DevicePath \"\"" Jul 7 05:47:06.005928 systemd[1]: Removed slice kubepods-besteffort-pod9b0ad1a5_0fe9_40ff_bc11_56cbbe3ff419.slice - libcontainer container kubepods-besteffort-pod9b0ad1a5_0fe9_40ff_bc11_56cbbe3ff419.slice. Jul 7 05:47:06.006858 systemd[1]: Removed slice kubepods-burstable-pod3e1905be_e755_47a1_9f5a_a1168964b3b2.slice - libcontainer container kubepods-burstable-pod3e1905be_e755_47a1_9f5a_a1168964b3b2.slice. Jul 7 05:47:06.006936 systemd[1]: kubepods-burstable-pod3e1905be_e755_47a1_9f5a_a1168964b3b2.slice: Consumed 6.637s CPU time. Jul 7 05:47:06.177104 systemd[1]: var-lib-kubelet-pods-9b0ad1a5\x2d0fe9\x2d40ff\x2dbc11\x2d56cbbe3ff419-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbnhk5.mount: Deactivated successfully. Jul 7 05:47:06.177217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2-rootfs.mount: Deactivated successfully. Jul 7 05:47:06.177274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19e7340a16355efd7cd68f7784e03629c1f3eab3a02b605bbad30e0e1904ece2-shm.mount: Deactivated successfully. 
Jul 7 05:47:06.177335 systemd[1]: var-lib-kubelet-pods-3e1905be\x2de755\x2d47a1\x2d9f5a\x2da1168964b3b2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drf8rw.mount: Deactivated successfully. Jul 7 05:47:06.177381 systemd[1]: var-lib-kubelet-pods-3e1905be\x2de755\x2d47a1\x2d9f5a\x2da1168964b3b2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 7 05:47:06.177430 systemd[1]: var-lib-kubelet-pods-3e1905be\x2de755\x2d47a1\x2d9f5a\x2da1168964b3b2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 7 05:47:06.246592 kubelet[2466]: I0707 05:47:06.246559 2466 scope.go:117] "RemoveContainer" containerID="8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0" Jul 7 05:47:06.248445 containerd[1443]: time="2025-07-07T05:47:06.248403060Z" level=info msg="RemoveContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\"" Jul 7 05:47:06.253192 containerd[1443]: time="2025-07-07T05:47:06.253154761Z" level=info msg="RemoveContainer for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" returns successfully" Jul 7 05:47:06.253670 kubelet[2466]: I0707 05:47:06.253648 2466 scope.go:117] "RemoveContainer" containerID="b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec" Jul 7 05:47:06.254830 containerd[1443]: time="2025-07-07T05:47:06.254804834Z" level=info msg="RemoveContainer for \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\"" Jul 7 05:47:06.259567 containerd[1443]: time="2025-07-07T05:47:06.259476776Z" level=info msg="RemoveContainer for \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\" returns successfully" Jul 7 05:47:06.259764 kubelet[2466]: I0707 05:47:06.259724 2466 scope.go:117] "RemoveContainer" containerID="e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71" Jul 7 05:47:06.261426 containerd[1443]: time="2025-07-07T05:47:06.261189649Z" level=info msg="RemoveContainer for 
\"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\"" Jul 7 05:47:06.264195 containerd[1443]: time="2025-07-07T05:47:06.264137917Z" level=info msg="RemoveContainer for \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\" returns successfully" Jul 7 05:47:06.264419 kubelet[2466]: I0707 05:47:06.264390 2466 scope.go:117] "RemoveContainer" containerID="874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3" Jul 7 05:47:06.267575 containerd[1443]: time="2025-07-07T05:47:06.267035586Z" level=info msg="RemoveContainer for \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\"" Jul 7 05:47:06.271021 containerd[1443]: time="2025-07-07T05:47:06.270061174Z" level=info msg="RemoveContainer for \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\" returns successfully" Jul 7 05:47:06.271087 kubelet[2466]: I0707 05:47:06.270427 2466 scope.go:117] "RemoveContainer" containerID="c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225" Jul 7 05:47:06.273438 containerd[1443]: time="2025-07-07T05:47:06.273331121Z" level=info msg="RemoveContainer for \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\"" Jul 7 05:47:06.278826 containerd[1443]: time="2025-07-07T05:47:06.277387145Z" level=info msg="RemoveContainer for \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\" returns successfully" Jul 7 05:47:06.279031 kubelet[2466]: I0707 05:47:06.277833 2466 scope.go:117] "RemoveContainer" containerID="8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0" Jul 7 05:47:06.279071 containerd[1443]: time="2025-07-07T05:47:06.278859659Z" level=error msg="ContainerStatus for \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\": not found" Jul 7 05:47:06.288289 kubelet[2466]: E0707 05:47:06.288249 
2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\": not found" containerID="8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0" Jul 7 05:47:06.288482 kubelet[2466]: I0707 05:47:06.288393 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0"} err="failed to get container status \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dfe68fd9234bbe03e182f0f408067dd39392e743a87ff7dc0184a42496a6fb0\": not found" Jul 7 05:47:06.288573 kubelet[2466]: I0707 05:47:06.288560 2466 scope.go:117] "RemoveContainer" containerID="b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec" Jul 7 05:47:06.288925 containerd[1443]: time="2025-07-07T05:47:06.288875779Z" level=error msg="ContainerStatus for \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\": not found" Jul 7 05:47:06.289029 kubelet[2466]: E0707 05:47:06.289007 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\": not found" containerID="b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec" Jul 7 05:47:06.289070 kubelet[2466]: I0707 05:47:06.289034 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec"} err="failed to get container status 
\"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4004103470ca4fa01779eb9cd948d020e333c8b883e99552619403dbd6352ec\": not found" Jul 7 05:47:06.289070 kubelet[2466]: I0707 05:47:06.289051 2466 scope.go:117] "RemoveContainer" containerID="e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71" Jul 7 05:47:06.289383 containerd[1443]: time="2025-07-07T05:47:06.289279818Z" level=error msg="ContainerStatus for \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\": not found" Jul 7 05:47:06.289464 kubelet[2466]: E0707 05:47:06.289399 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\": not found" containerID="e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71" Jul 7 05:47:06.289464 kubelet[2466]: I0707 05:47:06.289413 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71"} err="failed to get container status \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\": rpc error: code = NotFound desc = an error occurred when try to find container \"e5c48b0cd1326fb0de2ec3bf45387f16ce9264ff7e7dd331621124fa3c886f71\": not found" Jul 7 05:47:06.289464 kubelet[2466]: I0707 05:47:06.289424 2466 scope.go:117] "RemoveContainer" containerID="874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3" Jul 7 05:47:06.289733 containerd[1443]: time="2025-07-07T05:47:06.289562816Z" level=error msg="ContainerStatus for \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\": not found" Jul 7 05:47:06.290002 kubelet[2466]: E0707 05:47:06.289846 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\": not found" containerID="874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3" Jul 7 05:47:06.290002 kubelet[2466]: I0707 05:47:06.289872 2466 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3"} err="failed to get container status \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"874f8553ab0b423dfba792b6180d14a196277548c78769ebc31c2ea22d89b4c3\": not found" Jul 7 05:47:06.290002 kubelet[2466]: I0707 05:47:06.289887 2466 scope.go:117] "RemoveContainer" containerID="c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225" Jul 7 05:47:06.290111 containerd[1443]: time="2025-07-07T05:47:06.290085934Z" level=error msg="ContainerStatus for \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\": not found" Jul 7 05:47:06.290268 kubelet[2466]: E0707 05:47:06.290244 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\": not found" containerID="c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225" Jul 7 05:47:06.290310 kubelet[2466]: I0707 05:47:06.290294 2466 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225"} err="failed to get container status \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\": rpc error: code = NotFound desc = an error occurred when try to find container \"c18c93d0657a624abf915a0e8e3a4ceba9af99eb7defffa486e8dd8d8dbd1225\": not found" Jul 7 05:47:06.290340 kubelet[2466]: I0707 05:47:06.290312 2466 scope.go:117] "RemoveContainer" containerID="8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb" Jul 7 05:47:06.291648 containerd[1443]: time="2025-07-07T05:47:06.291383289Z" level=info msg="RemoveContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\"" Jul 7 05:47:06.295892 containerd[1443]: time="2025-07-07T05:47:06.295853271Z" level=info msg="RemoveContainer for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" returns successfully" Jul 7 05:47:06.296287 kubelet[2466]: I0707 05:47:06.296177 2466 scope.go:117] "RemoveContainer" containerID="8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb" Jul 7 05:47:06.296434 containerd[1443]: time="2025-07-07T05:47:06.296396949Z" level=error msg="ContainerStatus for \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\": not found" Jul 7 05:47:06.296546 kubelet[2466]: E0707 05:47:06.296523 2466 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\": not found" containerID="8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb" Jul 7 05:47:06.296580 kubelet[2466]: I0707 05:47:06.296554 2466 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb"} err="failed to get container status \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d7087ea1ca5fbe287f05a9104d946e9aa48b8e3800fc85fd955b2a6f9663dbb\": not found" Jul 7 05:47:07.122486 sshd[4107]: pam_unix(sshd:session): session closed for user core Jul 7 05:47:07.136229 systemd[1]: sshd@22-10.0.0.6:22-10.0.0.1:60328.service: Deactivated successfully. Jul 7 05:47:07.137733 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 05:47:07.138858 systemd-logind[1418]: Session 23 logged out. Waiting for processes to exit. Jul 7 05:47:07.140131 systemd[1]: Started sshd@23-10.0.0.6:22-10.0.0.1:60332.service - OpenSSH per-connection server daemon (10.0.0.1:60332). Jul 7 05:47:07.140877 systemd-logind[1418]: Removed session 23. Jul 7 05:47:07.172385 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 60332 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:47:07.173637 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:47:07.177657 systemd-logind[1418]: New session 24 of user core. Jul 7 05:47:07.188835 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 7 05:47:08.000661 kubelet[2466]: I0707 05:47:08.000599 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" path="/var/lib/kubelet/pods/3e1905be-e755-47a1-9f5a-a1168964b3b2/volumes" Jul 7 05:47:08.001234 kubelet[2466]: I0707 05:47:08.001197 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419" path="/var/lib/kubelet/pods/9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419/volumes" Jul 7 05:47:08.295958 sshd[4268]: pam_unix(sshd:session): session closed for user core Jul 7 05:47:08.308591 systemd[1]: sshd@23-10.0.0.6:22-10.0.0.1:60332.service: Deactivated successfully. Jul 7 05:47:08.313849 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 05:47:08.314007 systemd[1]: session-24.scope: Consumed 1.034s CPU time. Jul 7 05:47:08.315085 systemd-logind[1418]: Session 24 logged out. Waiting for processes to exit. Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323387 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="mount-cgroup" Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323419 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419" containerName="cilium-operator" Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323426 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="clean-cilium-state" Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323433 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="mount-bpf-fs" Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323439 2466 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="cilium-agent" Jul 7 05:47:08.323444 kubelet[2466]: E0707 05:47:08.323445 2466 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="apply-sysctl-overwrites" Jul 7 05:47:08.323727 kubelet[2466]: I0707 05:47:08.323467 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0ad1a5-0fe9-40ff-bc11-56cbbe3ff419" containerName="cilium-operator" Jul 7 05:47:08.323727 kubelet[2466]: I0707 05:47:08.323474 2466 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1905be-e755-47a1-9f5a-a1168964b3b2" containerName="cilium-agent" Jul 7 05:47:08.324216 systemd[1]: Started sshd@24-10.0.0.6:22-10.0.0.1:60336.service - OpenSSH per-connection server daemon (10.0.0.1:60336). Jul 7 05:47:08.329103 systemd-logind[1418]: Removed session 24. Jul 7 05:47:08.337938 systemd[1]: Created slice kubepods-burstable-pode748e808_d5fe_4ee3_bf3f_337014fea216.slice - libcontainer container kubepods-burstable-pode748e808_d5fe_4ee3_bf3f_337014fea216.slice. Jul 7 05:47:08.374760 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 60336 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78 Jul 7 05:47:08.377463 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 05:47:08.383518 systemd-logind[1418]: New session 25 of user core. Jul 7 05:47:08.392018 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 05:47:08.445921 sshd[4281]: pam_unix(sshd:session): session closed for user core Jul 7 05:47:08.458059 systemd[1]: sshd@24-10.0.0.6:22-10.0.0.1:60336.service: Deactivated successfully. Jul 7 05:47:08.460241 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 05:47:08.462855 systemd-logind[1418]: Session 25 logged out. Waiting for processes to exit. Jul 7 05:47:08.464434 systemd[1]: Started sshd@25-10.0.0.6:22-10.0.0.1:60340.service - OpenSSH per-connection server daemon (10.0.0.1:60340). Jul 7 05:47:08.465466 systemd-logind[1418]: Removed session 25. 
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479092 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-bpf-maps\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479135 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-lib-modules\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479160 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cdk\" (UniqueName: \"kubernetes.io/projected/e748e808-d5fe-4ee3-bf3f-337014fea216-kube-api-access-n4cdk\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479178 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e748e808-d5fe-4ee3-bf3f-337014fea216-clustermesh-secrets\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479193 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-cilium-run\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479374 kubelet[2466]: I0707 05:47:08.479210 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e748e808-d5fe-4ee3-bf3f-337014fea216-cilium-config-path\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479223 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-cilium-cgroup\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479239 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-host-proc-sys-net\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479253 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-cni-path\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479268 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-etc-cni-netd\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479283 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e748e808-d5fe-4ee3-bf3f-337014fea216-cilium-ipsec-secrets\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479582 kubelet[2466]: I0707 05:47:08.479299 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-xtables-lock\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479714 kubelet[2466]: I0707 05:47:08.479312 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e748e808-d5fe-4ee3-bf3f-337014fea216-hubble-tls\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479714 kubelet[2466]: I0707 05:47:08.479329 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-hostproc\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.479714 kubelet[2466]: I0707 05:47:08.479344 2466 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e748e808-d5fe-4ee3-bf3f-337014fea216-host-proc-sys-kernel\") pod \"cilium-zkqtp\" (UID: \"e748e808-d5fe-4ee3-bf3f-337014fea216\") " pod="kube-system/cilium-zkqtp"
Jul 7 05:47:08.496270 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 60340 ssh2: RSA SHA256:xbqw3wnLdCViXU9mTRaJT+fNoHcYvODdLgiUAA0kC78
Jul 7 05:47:08.497526 sshd[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 05:47:08.501627 systemd-logind[1418]: New session 26 of user core.
Jul 7 05:47:08.508849 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 05:47:08.640471 kubelet[2466]: E0707 05:47:08.640339 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:08.641011 containerd[1443]: time="2025-07-07T05:47:08.640906263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkqtp,Uid:e748e808-d5fe-4ee3-bf3f-337014fea216,Namespace:kube-system,Attempt:0,}"
Jul 7 05:47:08.690872 containerd[1443]: time="2025-07-07T05:47:08.690784878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 7 05:47:08.690872 containerd[1443]: time="2025-07-07T05:47:08.690834878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 7 05:47:08.690872 containerd[1443]: time="2025-07-07T05:47:08.690846478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:47:08.691035 containerd[1443]: time="2025-07-07T05:47:08.690915957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 7 05:47:08.709884 systemd[1]: Started cri-containerd-39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf.scope - libcontainer container 39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf.
Jul 7 05:47:08.731219 containerd[1443]: time="2025-07-07T05:47:08.731176680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zkqtp,Uid:e748e808-d5fe-4ee3-bf3f-337014fea216,Namespace:kube-system,Attempt:0,} returns sandbox id \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\""
Jul 7 05:47:08.731948 kubelet[2466]: E0707 05:47:08.731921 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:08.733980 containerd[1443]: time="2025-07-07T05:47:08.733946672Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 7 05:47:08.744546 containerd[1443]: time="2025-07-07T05:47:08.744499882Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5\""
Jul 7 05:47:08.745080 containerd[1443]: time="2025-07-07T05:47:08.745055160Z" level=info msg="StartContainer for \"b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5\""
Jul 7 05:47:08.769885 systemd[1]: Started cri-containerd-b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5.scope - libcontainer container b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5.
Jul 7 05:47:08.793616 containerd[1443]: time="2025-07-07T05:47:08.793544579Z" level=info msg="StartContainer for \"b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5\" returns successfully"
Jul 7 05:47:08.807011 systemd[1]: cri-containerd-b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5.scope: Deactivated successfully.
Jul 7 05:47:08.832672 containerd[1443]: time="2025-07-07T05:47:08.832614625Z" level=info msg="shim disconnected" id=b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5 namespace=k8s.io
Jul 7 05:47:08.832672 containerd[1443]: time="2025-07-07T05:47:08.832664585Z" level=warning msg="cleaning up after shim disconnected" id=b7d8a72da1b6e3ac6225714754714d3494a23f3290837d5f9816c0f28937f4c5 namespace=k8s.io
Jul 7 05:47:08.832672 containerd[1443]: time="2025-07-07T05:47:08.832674185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:47:09.259218 kubelet[2466]: E0707 05:47:09.259024 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:09.261680 containerd[1443]: time="2025-07-07T05:47:09.261095471Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 7 05:47:09.270238 containerd[1443]: time="2025-07-07T05:47:09.270202049Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078\""
Jul 7 05:47:09.270939 containerd[1443]: time="2025-07-07T05:47:09.270846847Z" level=info msg="StartContainer for \"6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078\""
Jul 7 05:47:09.297945 systemd[1]: Started cri-containerd-6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078.scope - libcontainer container 6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078.
Jul 7 05:47:09.316666 containerd[1443]: time="2025-07-07T05:47:09.316627657Z" level=info msg="StartContainer for \"6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078\" returns successfully"
Jul 7 05:47:09.322486 systemd[1]: cri-containerd-6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078.scope: Deactivated successfully.
Jul 7 05:47:09.348756 containerd[1443]: time="2025-07-07T05:47:09.348668220Z" level=info msg="shim disconnected" id=6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078 namespace=k8s.io
Jul 7 05:47:09.348756 containerd[1443]: time="2025-07-07T05:47:09.348753260Z" level=warning msg="cleaning up after shim disconnected" id=6e8ba5200cc0c2b2a1567c993e061a62ddeae4836382d12cbe858556003e8078 namespace=k8s.io
Jul 7 05:47:09.348756 containerd[1443]: time="2025-07-07T05:47:09.348763180Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:47:10.048088 kubelet[2466]: E0707 05:47:10.048043 2466 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 7 05:47:10.261892 kubelet[2466]: E0707 05:47:10.261822 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:10.264876 containerd[1443]: time="2025-07-07T05:47:10.264655268Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 7 05:47:10.282900 containerd[1443]: time="2025-07-07T05:47:10.282741354Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90\""
Jul 7 05:47:10.283414 containerd[1443]: time="2025-07-07T05:47:10.283377632Z" level=info msg="StartContainer for \"dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90\""
Jul 7 05:47:10.307853 systemd[1]: Started cri-containerd-dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90.scope - libcontainer container dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90.
Jul 7 05:47:10.328554 containerd[1443]: time="2025-07-07T05:47:10.328515466Z" level=info msg="StartContainer for \"dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90\" returns successfully"
Jul 7 05:47:10.331770 systemd[1]: cri-containerd-dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90.scope: Deactivated successfully.
Jul 7 05:47:10.349949 containerd[1443]: time="2025-07-07T05:47:10.349886025Z" level=info msg="shim disconnected" id=dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90 namespace=k8s.io
Jul 7 05:47:10.349949 containerd[1443]: time="2025-07-07T05:47:10.349943505Z" level=warning msg="cleaning up after shim disconnected" id=dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90 namespace=k8s.io
Jul 7 05:47:10.349949 containerd[1443]: time="2025-07-07T05:47:10.349952545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:47:10.585332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc705488f53157ae8761bfee9e0d0879a7ed5808910581cb5480cb3c71856c90-rootfs.mount: Deactivated successfully.
Jul 7 05:47:11.265771 kubelet[2466]: E0707 05:47:11.265737 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:11.268360 containerd[1443]: time="2025-07-07T05:47:11.267963875Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 7 05:47:11.280146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2355013589.mount: Deactivated successfully.
Jul 7 05:47:11.281888 containerd[1443]: time="2025-07-07T05:47:11.281855935Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0\""
Jul 7 05:47:11.283491 containerd[1443]: time="2025-07-07T05:47:11.283459213Z" level=info msg="StartContainer for \"ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0\""
Jul 7 05:47:11.310899 systemd[1]: Started cri-containerd-ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0.scope - libcontainer container ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0.
Jul 7 05:47:11.328294 systemd[1]: cri-containerd-ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0.scope: Deactivated successfully.
Jul 7 05:47:11.337422 containerd[1443]: time="2025-07-07T05:47:11.337327615Z" level=info msg="StartContainer for \"ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0\" returns successfully"
Jul 7 05:47:11.340763 containerd[1443]: time="2025-07-07T05:47:11.333490581Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode748e808_d5fe_4ee3_bf3f_337014fea216.slice/cri-containerd-ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0.scope/memory.events\": no such file or directory"
Jul 7 05:47:11.357923 containerd[1443]: time="2025-07-07T05:47:11.357869665Z" level=info msg="shim disconnected" id=ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0 namespace=k8s.io
Jul 7 05:47:11.357923 containerd[1443]: time="2025-07-07T05:47:11.357922785Z" level=warning msg="cleaning up after shim disconnected" id=ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0 namespace=k8s.io
Jul 7 05:47:11.358143 containerd[1443]: time="2025-07-07T05:47:11.357932065Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 7 05:47:11.445366 kubelet[2466]: I0707 05:47:11.445231 2466 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-07T05:47:11Z","lastTransitionTime":"2025-07-07T05:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 7 05:47:11.585426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae21433acdf01999ffdeb8a8324267b55da8eefed55f7d2dd943afe914494fe0-rootfs.mount: Deactivated successfully.
Jul 7 05:47:12.269461 kubelet[2466]: E0707 05:47:12.269404 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:12.270920 containerd[1443]: time="2025-07-07T05:47:12.270884395Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 7 05:47:12.286910 containerd[1443]: time="2025-07-07T05:47:12.286066060Z" level=info msg="CreateContainer within sandbox \"39142b34a8f34e9cc602b8cd872e6bd6690e77148bd29e1879969b342049afdf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7746b7b4ad4f4f6d0f30112e0d79105abad37ab25e3ae8b46cb3ee17f04a0e95\""
Jul 7 05:47:12.287647 containerd[1443]: time="2025-07-07T05:47:12.287489019Z" level=info msg="StartContainer for \"7746b7b4ad4f4f6d0f30112e0d79105abad37ab25e3ae8b46cb3ee17f04a0e95\""
Jul 7 05:47:12.312861 systemd[1]: Started cri-containerd-7746b7b4ad4f4f6d0f30112e0d79105abad37ab25e3ae8b46cb3ee17f04a0e95.scope - libcontainer container 7746b7b4ad4f4f6d0f30112e0d79105abad37ab25e3ae8b46cb3ee17f04a0e95.
Jul 7 05:47:12.336894 containerd[1443]: time="2025-07-07T05:47:12.336846931Z" level=info msg="StartContainer for \"7746b7b4ad4f4f6d0f30112e0d79105abad37ab25e3ae8b46cb3ee17f04a0e95\" returns successfully"
Jul 7 05:47:12.599790 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 7 05:47:13.274195 kubelet[2466]: E0707 05:47:13.274149 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:13.999059 kubelet[2466]: E0707 05:47:13.999018 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:14.641942 kubelet[2466]: E0707 05:47:14.641902 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:15.507214 systemd-networkd[1372]: lxc_health: Link UP
Jul 7 05:47:15.514067 systemd-networkd[1372]: lxc_health: Gained carrier
Jul 7 05:47:16.644710 kubelet[2466]: E0707 05:47:16.644650 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:16.662911 systemd-networkd[1372]: lxc_health: Gained IPv6LL
Jul 7 05:47:16.663913 kubelet[2466]: I0707 05:47:16.662909 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zkqtp" podStartSLOduration=8.662891988 podStartE2EDuration="8.662891988s" podCreationTimestamp="2025-07-07 05:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 05:47:13.291912403 +0000 UTC m=+83.387501804" watchObservedRunningTime="2025-07-07 05:47:16.662891988 +0000 UTC m=+86.758481349"
Jul 7 05:47:17.281410 kubelet[2466]: E0707 05:47:17.281364 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:18.282750 kubelet[2466]: E0707 05:47:18.282681 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 05:47:21.193792 sshd[4289]: pam_unix(sshd:session): session closed for user core
Jul 7 05:47:21.197036 systemd[1]: sshd@25-10.0.0.6:22-10.0.0.1:60340.service: Deactivated successfully.
Jul 7 05:47:21.198679 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 05:47:21.199238 systemd-logind[1418]: Session 26 logged out. Waiting for processes to exit.
Jul 7 05:47:21.200021 systemd-logind[1418]: Removed session 26.