Jul 11 04:43:35.787855 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 04:43:35.787875 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jul 11 03:37:34 -00 2025
Jul 11 04:43:35.787884 kernel: KASLR enabled
Jul 11 04:43:35.787889 kernel: efi: EFI v2.7 by EDK II
Jul 11 04:43:35.787894 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 11 04:43:35.787900 kernel: random: crng init done
Jul 11 04:43:35.787906 kernel: secureboot: Secure boot disabled
Jul 11 04:43:35.787912 kernel: ACPI: Early table checksum verification disabled
Jul 11 04:43:35.787917 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 11 04:43:35.787925 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 04:43:35.787931 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787936 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787942 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787948 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787955 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787962 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787968 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787974 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787980 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 04:43:35.787986 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 04:43:35.787992 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 11 04:43:35.787998 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 04:43:35.788004 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 11 04:43:35.788010 kernel: Zone ranges:
Jul 11 04:43:35.788016 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 04:43:35.788023 kernel: DMA32 empty
Jul 11 04:43:35.788029 kernel: Normal empty
Jul 11 04:43:35.788035 kernel: Device empty
Jul 11 04:43:35.788041 kernel: Movable zone start for each node
Jul 11 04:43:35.788046 kernel: Early memory node ranges
Jul 11 04:43:35.788052 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 11 04:43:35.788058 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 11 04:43:35.788064 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 11 04:43:35.788070 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 11 04:43:35.788076 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 11 04:43:35.788082 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 11 04:43:35.788088 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 11 04:43:35.788095 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 11 04:43:35.788101 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 11 04:43:35.788107 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 04:43:35.788116 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 04:43:35.788122 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 04:43:35.788128 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 04:43:35.788136 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 04:43:35.788142 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 04:43:35.788148 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 11 04:43:35.788155 kernel: psci: probing for conduit method from ACPI.
Jul 11 04:43:35.788161 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 04:43:35.788167 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 04:43:35.788173 kernel: psci: Trusted OS migration not required
Jul 11 04:43:35.788180 kernel: psci: SMC Calling Convention v1.1
Jul 11 04:43:35.788186 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 04:43:35.788192 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 11 04:43:35.788200 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 11 04:43:35.788207 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 04:43:35.788213 kernel: Detected PIPT I-cache on CPU0
Jul 11 04:43:35.788219 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 04:43:35.788225 kernel: CPU features: detected: Spectre-v4
Jul 11 04:43:35.788241 kernel: CPU features: detected: Spectre-BHB
Jul 11 04:43:35.788248 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 04:43:35.788254 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 04:43:35.788261 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 04:43:35.788267 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 04:43:35.788273 kernel: alternatives: applying boot alternatives
Jul 11 04:43:35.788280 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3897e9e5bdb5872ff4c86729cf311c0e9d40949a2432461ec9aeef8c2526e01
Jul 11 04:43:35.788288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 04:43:35.788294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 04:43:35.788361 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 04:43:35.788369 kernel: Fallback order for Node 0: 0
Jul 11 04:43:35.788376 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 11 04:43:35.788382 kernel: Policy zone: DMA
Jul 11 04:43:35.788389 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 04:43:35.788395 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 11 04:43:35.788401 kernel: software IO TLB: area num 4.
Jul 11 04:43:35.788407 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 11 04:43:35.788414 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 11 04:43:35.788423 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 04:43:35.788430 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 04:43:35.788437 kernel: rcu: RCU event tracing is enabled.
Jul 11 04:43:35.788444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 04:43:35.788451 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 04:43:35.788460 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 04:43:35.788468 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 04:43:35.788475 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 04:43:35.788482 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 04:43:35.788493 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 04:43:35.788500 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 04:43:35.788509 kernel: GICv3: 256 SPIs implemented
Jul 11 04:43:35.788520 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 04:43:35.788531 kernel: Root IRQ handler: gic_handle_irq
Jul 11 04:43:35.788547 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 04:43:35.788553 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 11 04:43:35.788559 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 04:43:35.788566 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 04:43:35.788572 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 04:43:35.788579 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 11 04:43:35.788586 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 11 04:43:35.788592 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 11 04:43:35.788598 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 04:43:35.788606 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 04:43:35.788612 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 04:43:35.788619 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 04:43:35.788626 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 04:43:35.788632 kernel: arm-pv: using stolen time PV
Jul 11 04:43:35.788639 kernel: Console: colour dummy device 80x25
Jul 11 04:43:35.788646 kernel: ACPI: Core revision 20240827
Jul 11 04:43:35.788653 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 04:43:35.788659 kernel: pid_max: default: 32768 minimum: 301
Jul 11 04:43:35.788666 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 04:43:35.788674 kernel: landlock: Up and running.
Jul 11 04:43:35.788680 kernel: SELinux: Initializing.
Jul 11 04:43:35.788687 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 04:43:35.788693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 04:43:35.788700 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 04:43:35.788706 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 04:43:35.788713 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 04:43:35.788719 kernel: Remapping and enabling EFI services.
Jul 11 04:43:35.788726 kernel: smp: Bringing up secondary CPUs ...
Jul 11 04:43:35.788738 kernel: Detected PIPT I-cache on CPU1
Jul 11 04:43:35.788745 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 04:43:35.788752 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 11 04:43:35.788761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 04:43:35.788767 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 04:43:35.788775 kernel: Detected PIPT I-cache on CPU2
Jul 11 04:43:35.788782 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 04:43:35.788789 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 11 04:43:35.788798 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 04:43:35.788804 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 04:43:35.788811 kernel: Detected PIPT I-cache on CPU3
Jul 11 04:43:35.788818 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 04:43:35.788825 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 11 04:43:35.788832 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 04:43:35.788838 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 04:43:35.788845 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 04:43:35.788852 kernel: SMP: Total of 4 processors activated.
Jul 11 04:43:35.788860 kernel: CPU: All CPU(s) started at EL1
Jul 11 04:43:35.788867 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 04:43:35.788874 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 04:43:35.788881 kernel: CPU features: detected: Common not Private translations
Jul 11 04:43:35.788887 kernel: CPU features: detected: CRC32 instructions
Jul 11 04:43:35.788894 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 04:43:35.788901 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 04:43:35.788908 kernel: CPU features: detected: LSE atomic instructions
Jul 11 04:43:35.788915 kernel: CPU features: detected: Privileged Access Never
Jul 11 04:43:35.788923 kernel: CPU features: detected: RAS Extension Support
Jul 11 04:43:35.788930 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 04:43:35.788936 kernel: alternatives: applying system-wide alternatives
Jul 11 04:43:35.788943 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 11 04:43:35.788951 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 11 04:43:35.788957 kernel: devtmpfs: initialized
Jul 11 04:43:35.788964 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 04:43:35.788971 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 04:43:35.788978 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 04:43:35.788986 kernel: 0 pages in range for non-PLT usage
Jul 11 04:43:35.788993 kernel: 508448 pages in range for PLT usage
Jul 11 04:43:35.788999 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 04:43:35.789006 kernel: SMBIOS 3.0.0 present.
Jul 11 04:43:35.789013 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 11 04:43:35.789020 kernel: DMI: Memory slots populated: 1/1
Jul 11 04:43:35.789026 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 04:43:35.789033 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 04:43:35.789040 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 04:43:35.789048 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 04:43:35.789055 kernel: audit: initializing netlink subsys (disabled)
Jul 11 04:43:35.789062 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 11 04:43:35.789069 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 04:43:35.789076 kernel: cpuidle: using governor menu
Jul 11 04:43:35.789083 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 04:43:35.789090 kernel: ASID allocator initialised with 32768 entries
Jul 11 04:43:35.789096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 04:43:35.789103 kernel: Serial: AMBA PL011 UART driver
Jul 11 04:43:35.789111 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 04:43:35.789118 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 04:43:35.789125 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 04:43:35.789132 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 04:43:35.789139 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 04:43:35.789145 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 04:43:35.789152 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 04:43:35.789159 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 04:43:35.789166 kernel: ACPI: Added _OSI(Module Device)
Jul 11 04:43:35.789174 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 04:43:35.789181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 04:43:35.789187 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 04:43:35.789194 kernel: ACPI: Interpreter enabled
Jul 11 04:43:35.789201 kernel: ACPI: Using GIC for interrupt routing
Jul 11 04:43:35.789208 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 04:43:35.789217 kernel: ACPI: CPU0 has been hot-added
Jul 11 04:43:35.789223 kernel: ACPI: CPU1 has been hot-added
Jul 11 04:43:35.789230 kernel: ACPI: CPU2 has been hot-added
Jul 11 04:43:35.789237 kernel: ACPI: CPU3 has been hot-added
Jul 11 04:43:35.789245 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 04:43:35.789252 kernel: printk: legacy console [ttyAMA0] enabled
Jul 11 04:43:35.789259 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 04:43:35.789413 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 04:43:35.789480 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 04:43:35.789539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 04:43:35.789596 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 04:43:35.789655 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 04:43:35.789664 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 04:43:35.789671 kernel: PCI host bridge to bus 0000:00
Jul 11 04:43:35.789735 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 04:43:35.789788 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 04:43:35.789842 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 04:43:35.789905 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 04:43:35.789981 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 11 04:43:35.790050 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 04:43:35.790111 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 11 04:43:35.790170 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 11 04:43:35.790228 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 04:43:35.790287 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 11 04:43:35.790386 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 11 04:43:35.790453 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 11 04:43:35.790509 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 04:43:35.790561 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 04:43:35.790616 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 04:43:35.790625 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 04:43:35.790633 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 04:43:35.790640 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 04:43:35.790649 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 04:43:35.790656 kernel: iommu: Default domain type: Translated
Jul 11 04:43:35.790663 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 04:43:35.790670 kernel: efivars: Registered efivars operations
Jul 11 04:43:35.790677 kernel: vgaarb: loaded
Jul 11 04:43:35.790684 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 04:43:35.790691 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 04:43:35.790698 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 04:43:35.790705 kernel: pnp: PnP ACPI init
Jul 11 04:43:35.790778 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 04:43:35.790788 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 04:43:35.790795 kernel: NET: Registered PF_INET protocol family
Jul 11 04:43:35.790802 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 04:43:35.790809 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 04:43:35.790816 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 04:43:35.790823 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 04:43:35.790830 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 04:43:35.790838 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 04:43:35.790845 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 04:43:35.790852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 04:43:35.790859 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 04:43:35.790866 kernel: PCI: CLS 0 bytes, default 64
Jul 11 04:43:35.790873 kernel: kvm [1]: HYP mode not available
Jul 11 04:43:35.790879 kernel: Initialise system trusted keyrings
Jul 11 04:43:35.790886 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 04:43:35.790893 kernel: Key type asymmetric registered
Jul 11 04:43:35.790901 kernel: Asymmetric key parser 'x509' registered
Jul 11 04:43:35.790908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 11 04:43:35.790915 kernel: io scheduler mq-deadline registered
Jul 11 04:43:35.790922 kernel: io scheduler kyber registered
Jul 11 04:43:35.790929 kernel: io scheduler bfq registered
Jul 11 04:43:35.790936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 04:43:35.790943 kernel: ACPI: button: Power Button [PWRB]
Jul 11 04:43:35.790950 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 04:43:35.791010 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 04:43:35.791020 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 04:43:35.791027 kernel: thunder_xcv, ver 1.0
Jul 11 04:43:35.791034 kernel: thunder_bgx, ver 1.0
Jul 11 04:43:35.791041 kernel: nicpf, ver 1.0
Jul 11 04:43:35.791048 kernel: nicvf, ver 1.0
Jul 11 04:43:35.791113 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 04:43:35.791168 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T04:43:35 UTC (1752209015)
Jul 11 04:43:35.791177 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 04:43:35.791186 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 11 04:43:35.791193 kernel: watchdog: NMI not fully supported
Jul 11 04:43:35.791200 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 04:43:35.791207 kernel: NET: Registered PF_INET6 protocol family
Jul 11 04:43:35.791214 kernel: Segment Routing with IPv6
Jul 11 04:43:35.791221 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 04:43:35.791228 kernel: NET: Registered PF_PACKET protocol family
Jul 11 04:43:35.791234 kernel: Key type dns_resolver registered
Jul 11 04:43:35.791241 kernel: registered taskstats version 1
Jul 11 04:43:35.791248 kernel: Loading compiled-in X.509 certificates
Jul 11 04:43:35.791256 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e555124bc12a1bc970fb227548e219a82d747130'
Jul 11 04:43:35.791263 kernel: Demotion targets for Node 0: null
Jul 11 04:43:35.791270 kernel: Key type .fscrypt registered
Jul 11 04:43:35.791277 kernel: Key type fscrypt-provisioning registered
Jul 11 04:43:35.791283 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 04:43:35.791290 kernel: ima: Allocated hash algorithm: sha1
Jul 11 04:43:35.791304 kernel: ima: No architecture policies found
Jul 11 04:43:35.791319 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 04:43:35.791329 kernel: clk: Disabling unused clocks
Jul 11 04:43:35.791336 kernel: PM: genpd: Disabling unused power domains
Jul 11 04:43:35.791343 kernel: Warning: unable to open an initial console.
Jul 11 04:43:35.791350 kernel: Freeing unused kernel memory: 39424K
Jul 11 04:43:35.791357 kernel: Run /init as init process
Jul 11 04:43:35.791363 kernel: with arguments:
Jul 11 04:43:35.791370 kernel: /init
Jul 11 04:43:35.791377 kernel: with environment:
Jul 11 04:43:35.791384 kernel: HOME=/
Jul 11 04:43:35.791392 kernel: TERM=linux
Jul 11 04:43:35.791399 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 04:43:35.791407 systemd[1]: Successfully made /usr/ read-only.
Jul 11 04:43:35.791416 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 04:43:35.791424 systemd[1]: Detected virtualization kvm.
Jul 11 04:43:35.791431 systemd[1]: Detected architecture arm64.
Jul 11 04:43:35.791438 systemd[1]: Running in initrd.
Jul 11 04:43:35.791446 systemd[1]: No hostname configured, using default hostname.
Jul 11 04:43:35.791455 systemd[1]: Hostname set to .
Jul 11 04:43:35.791462 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 04:43:35.791470 systemd[1]: Queued start job for default target initrd.target.
Jul 11 04:43:35.791477 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 04:43:35.791485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 04:43:35.791493 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 04:43:35.791501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 04:43:35.791508 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 04:43:35.791518 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 04:43:35.791526 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 04:43:35.791534 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 04:43:35.791541 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 04:43:35.791548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 04:43:35.791556 systemd[1]: Reached target paths.target - Path Units.
Jul 11 04:43:35.791564 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 04:43:35.791572 systemd[1]: Reached target swap.target - Swaps.
Jul 11 04:43:35.791579 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 04:43:35.791586 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 04:43:35.791594 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 04:43:35.791601 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 04:43:35.791609 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 04:43:35.791616 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 04:43:35.791624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 04:43:35.791633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 04:43:35.791640 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 04:43:35.791648 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 04:43:35.791655 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 04:43:35.791663 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 04:43:35.791671 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 04:43:35.791678 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 04:43:35.791686 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 04:43:35.791693 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 04:43:35.791702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 04:43:35.791710 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 04:43:35.791718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 04:43:35.791725 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 04:43:35.791734 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 04:43:35.791758 systemd-journald[242]: Collecting audit messages is disabled.
Jul 11 04:43:35.791777 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 04:43:35.791790 systemd-journald[242]: Journal started
Jul 11 04:43:35.791808 systemd-journald[242]: Runtime Journal (/run/log/journal/fd496b807bb3445da9d1b3a98ec28fad) is 6M, max 48.5M, 42.4M free.
Jul 11 04:43:35.795133 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 04:43:35.795237 systemd-modules-load[243]: Inserted module 'overlay'
Jul 11 04:43:35.802227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 04:43:35.804258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 04:43:35.811361 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 04:43:35.816426 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 04:43:35.819740 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 04:43:35.820755 systemd-modules-load[243]: Inserted module 'br_netfilter'
Jul 11 04:43:35.822413 kernel: Bridge firewalling registered
Jul 11 04:43:35.821681 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 04:43:35.821767 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 04:43:35.824620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 04:43:35.826575 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 04:43:35.833370 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 04:43:35.835982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 04:43:35.837748 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 04:43:35.841175 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 04:43:35.843964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 04:43:35.865145 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3897e9e5bdb5872ff4c86729cf311c0e9d40949a2432461ec9aeef8c2526e01
Jul 11 04:43:35.878926 systemd-resolved[291]: Positive Trust Anchors:
Jul 11 04:43:35.878946 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 04:43:35.878976 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 04:43:35.883796 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jul 11 04:43:35.885065 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 04:43:35.888377 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 04:43:35.940340 kernel: SCSI subsystem initialized
Jul 11 04:43:35.945332 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 04:43:35.955339 kernel: iscsi: registered transport (tcp)
Jul 11 04:43:35.968346 kernel: iscsi: registered transport (qla4xxx)
Jul 11 04:43:35.968401 kernel: QLogic iSCSI HBA Driver
Jul 11 04:43:35.985979 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 04:43:36.005180 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 04:43:36.006758 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 04:43:36.054096 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 04:43:36.057466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 04:43:36.118356 kernel: raid6: neonx8 gen() 15771 MB/s
Jul 11 04:43:36.135330 kernel: raid6: neonx4 gen() 15808 MB/s
Jul 11 04:43:36.152335 kernel: raid6: neonx2 gen() 13204 MB/s
Jul 11 04:43:36.169331 kernel: raid6: neonx1 gen() 10451 MB/s
Jul 11 04:43:36.186340 kernel: raid6: int64x8 gen() 6895 MB/s
Jul 11 04:43:36.203357 kernel: raid6: int64x4 gen() 7343 MB/s
Jul 11 04:43:36.220348 kernel: raid6: int64x2 gen() 6093 MB/s
Jul 11 04:43:36.237476 kernel: raid6: int64x1 gen() 5043 MB/s
Jul 11 04:43:36.237517 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Jul 11 04:43:36.255377 kernel: raid6: .... xor() 12328 MB/s, rmw enabled
Jul 11 04:43:36.255418 kernel: raid6: using neon recovery algorithm
Jul 11 04:43:36.260334 kernel: xor: measuring software checksum speed
Jul 11 04:43:36.261480 kernel: 8regs : 18090 MB/sec
Jul 11 04:43:36.261492 kernel: 32regs : 21681 MB/sec
Jul 11 04:43:36.262720 kernel: arm64_neon : 27974 MB/sec
Jul 11 04:43:36.262739 kernel: xor: using function: arm64_neon (27974 MB/sec)
Jul 11 04:43:36.316356 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 04:43:36.323390 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 04:43:36.325804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 04:43:36.351559 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Jul 11 04:43:36.355587 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 04:43:36.357547 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 04:43:36.383369 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Jul 11 04:43:36.405628 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 04:43:36.407859 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 04:43:36.457998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 04:43:36.460907 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 04:43:36.512426 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 04:43:36.518386 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 04:43:36.518542 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 04:43:36.519584 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 04:43:36.527436 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 04:43:36.532423 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 04:43:36.532443 kernel: GPT:9289727 != 19775487
Jul 11 04:43:36.532453 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 04:43:36.532462 kernel: GPT:9289727 != 19775487
Jul 11 04:43:36.532470 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 04:43:36.532479 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 04:43:36.532140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 04:43:36.555625 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 04:43:36.556982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 04:43:36.563833 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 04:43:36.575955 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 04:43:36.586106 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 04:43:36.587276 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 04:43:36.596212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 04:43:36.597407 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 04:43:36.599371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 04:43:36.601356 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 04:43:36.603966 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 04:43:36.605684 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 04:43:36.633578 disk-uuid[593]: Primary Header is updated.
Jul 11 04:43:36.633578 disk-uuid[593]: Secondary Entries is updated.
Jul 11 04:43:36.633578 disk-uuid[593]: Secondary Header is updated.
Jul 11 04:43:36.637332 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 04:43:36.639800 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 04:43:37.650024 disk-uuid[596]: The operation has completed successfully.
Jul 11 04:43:37.652175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 04:43:37.677323 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 04:43:37.677423 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 04:43:37.701055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 04:43:37.725096 sh[612]: Success
Jul 11 04:43:37.742155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 04:43:37.742197 kernel: device-mapper: uevent: version 1.0.3
Jul 11 04:43:37.742215 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 04:43:37.753342 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 11 04:43:37.784123 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 04:43:37.786483 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 04:43:37.803499 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 04:43:37.809357 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 04:43:37.809391 kernel: BTRFS: device fsid 3cc53545-bcff-43a4-a907-3a89bda31132 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (624)
Jul 11 04:43:37.810895 kernel: BTRFS info (device dm-0): first mount of filesystem 3cc53545-bcff-43a4-a907-3a89bda31132
Jul 11 04:43:37.810917 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 04:43:37.812420 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 04:43:37.820535 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 04:43:37.821857 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 04:43:37.823229 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 04:43:37.824072 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 04:43:37.828077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 04:43:37.850354 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653)
Jul 11 04:43:37.850430 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 04:43:37.850449 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 04:43:37.851909 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 04:43:37.858329 kernel: BTRFS info (device vda6): last unmount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 04:43:37.859787 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 04:43:37.862473 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 04:43:37.957265 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 04:43:37.964471 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 04:43:38.010183 systemd-networkd[797]: lo: Link UP
Jul 11 04:43:38.010195 systemd-networkd[797]: lo: Gained carrier
Jul 11 04:43:38.010966 systemd-networkd[797]: Enumeration completed
Jul 11 04:43:38.011396 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 04:43:38.011704 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 04:43:38.011708 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 04:43:38.013300 systemd-networkd[797]: eth0: Link UP
Jul 11 04:43:38.013304 systemd-networkd[797]: eth0: Gained carrier
Jul 11 04:43:38.013383 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 04:43:38.014062 systemd[1]: Reached target network.target - Network.
Jul 11 04:43:38.040364 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 04:43:38.074942 ignition[704]: Ignition 2.21.0
Jul 11 04:43:38.074958 ignition[704]: Stage: fetch-offline
Jul 11 04:43:38.074988 ignition[704]: no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:38.074996 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:38.075237 ignition[704]: parsed url from cmdline: ""
Jul 11 04:43:38.075241 ignition[704]: no config URL provided
Jul 11 04:43:38.075245 ignition[704]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 04:43:38.075252 ignition[704]: no config at "/usr/lib/ignition/user.ign"
Jul 11 04:43:38.075271 ignition[704]: op(1): [started] loading QEMU firmware config module
Jul 11 04:43:38.075276 ignition[704]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 04:43:38.091693 ignition[704]: op(1): [finished] loading QEMU firmware config module
Jul 11 04:43:38.133151 ignition[704]: parsing config with SHA512: ed5ac56a6602423a7da83c4d369449aed77df6ae0e805a8d0a252a10fcd3d5778d350aa1ad7b48d3433bc49e727f30297a6dc188096a74e77eef2fe52f4891b6
Jul 11 04:43:38.137596 unknown[704]: fetched base config from "system"
Jul 11 04:43:38.137611 unknown[704]: fetched user config from "qemu"
Jul 11 04:43:38.137970 ignition[704]: fetch-offline: fetch-offline passed
Jul 11 04:43:38.138040 ignition[704]: Ignition finished successfully
Jul 11 04:43:38.141298 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 04:43:38.143016 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 04:43:38.144023 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 04:43:38.185258 ignition[810]: Ignition 2.21.0
Jul 11 04:43:38.185277 ignition[810]: Stage: kargs
Jul 11 04:43:38.185447 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:38.185457 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:38.186829 ignition[810]: kargs: kargs passed
Jul 11 04:43:38.187706 ignition[810]: Ignition finished successfully
Jul 11 04:43:38.189960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 04:43:38.193117 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 04:43:38.219368 ignition[817]: Ignition 2.21.0
Jul 11 04:43:38.219383 ignition[817]: Stage: disks
Jul 11 04:43:38.219517 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:38.219526 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:38.222166 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 04:43:38.220269 ignition[817]: disks: disks passed
Jul 11 04:43:38.225133 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 04:43:38.220342 ignition[817]: Ignition finished successfully
Jul 11 04:43:38.226985 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 04:43:38.228681 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 04:43:38.230483 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 04:43:38.232281 systemd[1]: Reached target basic.target - Basic System.
Jul 11 04:43:38.235402 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 04:43:38.261368 systemd-fsck[826]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 11 04:43:38.266506 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 04:43:38.268793 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 04:43:38.353342 kernel: EXT4-fs (vda9): mounted filesystem 1377db55-4b0b-44d7-86ad-f9343775ed75 r/w with ordered data mode. Quota mode: none.
Jul 11 04:43:38.353497 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 04:43:38.354777 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 04:43:38.357542 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 04:43:38.359306 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 04:43:38.360539 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 04:43:38.360582 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 04:43:38.360605 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 04:43:38.375244 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 04:43:38.377932 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 04:43:38.383353 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (834)
Jul 11 04:43:38.383383 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 04:43:38.385810 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 04:43:38.385854 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 04:43:38.390600 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 04:43:38.431459 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 04:43:38.435580 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Jul 11 04:43:38.439065 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 04:43:38.443652 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 04:43:38.519779 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 04:43:38.521761 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 04:43:38.523273 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 04:43:38.548339 kernel: BTRFS info (device vda6): last unmount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 04:43:38.563203 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 04:43:38.573718 ignition[950]: INFO : Ignition 2.21.0
Jul 11 04:43:38.573718 ignition[950]: INFO : Stage: mount
Jul 11 04:43:38.575335 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:38.575335 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:38.578471 ignition[950]: INFO : mount: mount passed
Jul 11 04:43:38.578471 ignition[950]: INFO : Ignition finished successfully
Jul 11 04:43:38.578049 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 04:43:38.580375 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 04:43:38.808168 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 04:43:38.809678 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 04:43:38.829417 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (962)
Jul 11 04:43:38.836377 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 04:43:38.836396 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 04:43:38.836406 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 04:43:38.845731 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 04:43:38.877174 ignition[979]: INFO : Ignition 2.21.0
Jul 11 04:43:38.877174 ignition[979]: INFO : Stage: files
Jul 11 04:43:38.879801 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:38.879801 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:38.879801 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 04:43:38.883152 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 04:43:38.883152 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 04:43:38.886013 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 04:43:38.886013 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 04:43:38.886013 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 04:43:38.885686 unknown[979]: wrote ssh authorized keys file for user: core
Jul 11 04:43:38.891297 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 04:43:38.891297 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 11 04:43:39.023013 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 04:43:39.192094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 04:43:39.192094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 04:43:39.195739 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 11 04:43:39.572150 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 11 04:43:39.719540 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 04:43:39.721310 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 04:43:39.734867 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 11 04:43:39.910625 systemd-networkd[797]: eth0: Gained IPv6LL
Jul 11 04:43:40.149861 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 11 04:43:41.137964 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 04:43:41.137964 ignition[979]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 11 04:43:41.141375 ignition[979]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 04:43:41.183276 ignition[979]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 04:43:41.183276 ignition[979]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 11 04:43:41.183276 ignition[979]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 11 04:43:41.188446 ignition[979]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 04:43:41.188446 ignition[979]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 04:43:41.188446 ignition[979]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 11 04:43:41.188446 ignition[979]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 04:43:41.214844 ignition[979]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 04:43:41.218286 ignition[979]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 04:43:41.220488 ignition[979]: INFO : files: files passed
Jul 11 04:43:41.220488 ignition[979]: INFO : Ignition finished successfully
Jul 11 04:43:41.222570 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 04:43:41.231382 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 04:43:41.236446 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 04:43:41.253371 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 04:43:41.253476 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 04:43:41.257005 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 04:43:41.258350 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 04:43:41.258350 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 04:43:41.261471 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 04:43:41.262448 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 04:43:41.264482 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 04:43:41.267191 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 04:43:41.312621 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 04:43:41.312738 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 04:43:41.314978 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 04:43:41.316879 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 04:43:41.318694 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 04:43:41.321446 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 04:43:41.341929 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 04:43:41.344289 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 04:43:41.367173 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 04:43:41.369541 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 04:43:41.371845 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 04:43:41.372799 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 04:43:41.372913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 04:43:41.375405 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 04:43:41.377351 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 04:43:41.379024 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 04:43:41.380720 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 04:43:41.382567 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 04:43:41.384446 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 04:43:41.386384 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 04:43:41.388238 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 04:43:41.390164 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 04:43:41.392092 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 04:43:41.394064 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 04:43:41.395575 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 04:43:41.395707 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 04:43:41.398057 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 04:43:41.400029 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 04:43:41.401898 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 04:43:41.402020 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 04:43:41.403938 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 04:43:41.404052 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 04:43:41.406724 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 04:43:41.406830 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 04:43:41.408827 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 04:43:41.410352 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 04:43:41.416375 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 04:43:41.417622 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 04:43:41.419670 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 04:43:41.421201 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 04:43:41.421284 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 04:43:41.422831 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 04:43:41.422906 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 04:43:41.424415 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 04:43:41.424524 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 04:43:41.426247 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 04:43:41.426366 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 04:43:41.428564 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 04:43:41.430974 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 04:43:41.439380 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 04:43:41.439501 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 04:43:41.441406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 04:43:41.441504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 04:43:41.446571 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 04:43:41.456547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 04:43:41.465622 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 04:43:41.477348 ignition[1034]: INFO : Ignition 2.21.0
Jul 11 04:43:41.477348 ignition[1034]: INFO : Stage: umount
Jul 11 04:43:41.477348 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 04:43:41.477348 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 04:43:41.481373 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 04:43:41.481459 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 04:43:41.486669 ignition[1034]: INFO : umount: umount passed
Jul 11 04:43:41.486669 ignition[1034]: INFO : Ignition finished successfully
Jul 11 04:43:41.488623 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 04:43:41.488712 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 04:43:41.490923 systemd[1]: Stopped target network.target - Network.
Jul 11 04:43:41.492719 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 04:43:41.492792 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 04:43:41.494275 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 04:43:41.494348 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 04:43:41.497870 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 04:43:41.497921 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 04:43:41.499552 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 04:43:41.499592 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 04:43:41.501252 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 04:43:41.501323 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 04:43:41.503178 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 04:43:41.504838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 04:43:41.514516 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 04:43:41.515389 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 04:43:41.518796 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 04:43:41.518987 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 04:43:41.519077 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 04:43:41.521140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 04:43:41.521786 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 04:43:41.523764 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 04:43:41.523802 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 04:43:41.526734 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 04:43:41.528111 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 04:43:41.528184 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 04:43:41.532425 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 04:43:41.532476 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 04:43:41.536545 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 04:43:41.536592 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 04:43:41.538654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 04:43:41.538736 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 04:43:41.543502 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 04:43:41.547794 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 04:43:41.547856 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 04:43:41.560125 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 04:43:41.560237 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 04:43:41.562434 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 04:43:41.562552 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 04:43:41.564647 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 04:43:41.564711 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 04:43:41.565873 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 04:43:41.565906 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 04:43:41.568040 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 04:43:41.568089 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 04:43:41.570806 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 04:43:41.570854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 04:43:41.573439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 04:43:41.573487 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 04:43:41.576132 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 04:43:41.577295 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 04:43:41.577362 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 04:43:41.580105 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 04:43:41.580148 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 04:43:41.583247 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 04:43:41.583299 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 04:43:41.586424 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 04:43:41.586465 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 04:43:41.588896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 04:43:41.588938 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 04:43:41.592835 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 04:43:41.592879 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 11 04:43:41.592904 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 04:43:41.592933 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 04:43:41.593199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 04:43:41.593326 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 04:43:41.595957 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 04:43:41.597995 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 04:43:41.611762 systemd[1]: Switching root.
Jul 11 04:43:41.648494 systemd-journald[242]: Journal stopped
Jul 11 04:43:42.435183 systemd-journald[242]: Received SIGTERM from PID 1 (systemd).
Jul 11 04:43:42.435233 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 04:43:42.435249 kernel: SELinux: policy capability open_perms=1
Jul 11 04:43:42.435258 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 04:43:42.435270 kernel: SELinux: policy capability always_check_network=0
Jul 11 04:43:42.435294 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 04:43:42.435308 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 04:43:42.435344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 04:43:42.435355 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 04:43:42.435363 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 04:43:42.435373 kernel: audit: type=1403 audit(1752209021.818:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 04:43:42.435383 systemd[1]: Successfully loaded SELinux policy in 59.739ms.
Jul 11 04:43:42.435402 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.140ms.
Jul 11 04:43:42.435416 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 04:43:42.435427 systemd[1]: Detected virtualization kvm.
Jul 11 04:43:42.435438 systemd[1]: Detected architecture arm64.
Jul 11 04:43:42.435448 systemd[1]: Detected first boot.
Jul 11 04:43:42.435458 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 04:43:42.435467 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 04:43:42.435476 zram_generator::config[1079]: No configuration found.
Jul 11 04:43:42.435487 systemd[1]: Populated /etc with preset unit settings.
Jul 11 04:43:42.435497 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 04:43:42.435508 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 04:43:42.435521 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 04:43:42.435533 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 04:43:42.435542 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 04:43:42.435552 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 04:43:42.435562 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 04:43:42.435571 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 04:43:42.435581 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 04:43:42.435591 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 04:43:42.435601 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 04:43:42.435611 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 04:43:42.435621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 04:43:42.435631 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 04:43:42.435641 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 04:43:42.435651 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 04:43:42.435661 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 04:43:42.435671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 04:43:42.435681 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 11 04:43:42.435691 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 04:43:42.435702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 04:43:42.435715 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 04:43:42.435725 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 04:43:42.435735 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 04:43:42.435745 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 04:43:42.435757 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 04:43:42.435766 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 04:43:42.435776 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 04:43:42.435787 systemd[1]: Reached target swap.target - Swaps.
Jul 11 04:43:42.435797 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 04:43:42.435807 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 04:43:42.435817 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 04:43:42.435826 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 04:43:42.435836 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 04:43:42.435846 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 04:43:42.435856 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 04:43:42.435865 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 04:43:42.435876 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 04:43:42.435887 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 04:43:42.435896 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 04:43:42.435906 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 04:43:42.435916 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 04:43:42.435926 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 04:43:42.435936 systemd[1]: Reached target machines.target - Containers.
Jul 11 04:43:42.435945 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 04:43:42.435956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 04:43:42.435966 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 04:43:42.435976 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 04:43:42.435986 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 04:43:42.435996 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 04:43:42.436007 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 04:43:42.436017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 04:43:42.436026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 04:43:42.436037 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 04:43:42.436048 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 04:43:42.436057 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 04:43:42.436067 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 04:43:42.436077 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 04:43:42.436088 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 04:43:42.436098 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 04:43:42.436107 kernel: loop: module loaded
Jul 11 04:43:42.436116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 04:43:42.436127 kernel: fuse: init (API version 7.41)
Jul 11 04:43:42.436137 kernel: ACPI: bus type drm_connector registered
Jul 11 04:43:42.436146 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 04:43:42.436156 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 04:43:42.436166 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 04:43:42.436175 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 04:43:42.436187 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 04:43:42.436196 systemd[1]: Stopped verity-setup.service.
Jul 11 04:43:42.436207 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 04:43:42.436217 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 04:43:42.436245 systemd-journald[1159]: Collecting audit messages is disabled.
Jul 11 04:43:42.436269 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 04:43:42.436285 systemd-journald[1159]: Journal started
Jul 11 04:43:42.436305 systemd-journald[1159]: Runtime Journal (/run/log/journal/fd496b807bb3445da9d1b3a98ec28fad) is 6M, max 48.5M, 42.4M free.
Jul 11 04:43:42.441357 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 04:43:42.441387 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 04:43:42.441400 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 04:43:42.203964 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 04:43:42.227331 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 04:43:42.227714 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 04:43:42.443341 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 04:43:42.446349 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 04:43:42.447757 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 04:43:42.449297 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 04:43:42.449482 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 04:43:42.450879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 04:43:42.451029 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 04:43:42.453700 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 04:43:42.453867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 04:43:42.455153 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 04:43:42.455346 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 04:43:42.456871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 04:43:42.457033 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 04:43:42.458412 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 04:43:42.458571 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 04:43:42.459872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 04:43:42.461457 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 04:43:42.462905 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 04:43:42.464762 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 04:43:42.475286 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 04:43:42.477648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 04:43:42.479755 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 04:43:42.481013 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 04:43:42.481045 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 04:43:42.483056 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 04:43:42.492146 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 04:43:42.493309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 04:43:42.494579 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 04:43:42.496368 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 04:43:42.497592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 04:43:42.498379 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 04:43:42.499459 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 04:43:42.504454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 04:43:42.508079 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 04:43:42.508427 systemd-journald[1159]: Time spent on flushing to /var/log/journal/fd496b807bb3445da9d1b3a98ec28fad is 15.186ms for 891 entries.
Jul 11 04:43:42.508427 systemd-journald[1159]: System Journal (/var/log/journal/fd496b807bb3445da9d1b3a98ec28fad) is 8M, max 195.6M, 187.6M free.
Jul 11 04:43:42.535133 systemd-journald[1159]: Received client request to flush runtime journal.
Jul 11 04:43:42.535178 kernel: loop0: detected capacity change from 0 to 105936
Jul 11 04:43:42.511184 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 04:43:42.515338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 04:43:42.517581 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 04:43:42.518910 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 04:43:42.520563 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 04:43:42.526716 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 04:43:42.530866 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 04:43:42.532669 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 04:43:42.544391 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 04:43:42.551347 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 04:43:42.555257 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 11 04:43:42.555289 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 11 04:43:42.557698 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 04:43:42.561767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 04:43:42.564689 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 04:43:42.569373 kernel: loop1: detected capacity change from 0 to 134232
Jul 11 04:43:42.594333 kernel: loop2: detected capacity change from 0 to 203944
Jul 11 04:43:42.610663 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 04:43:42.613900 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 04:43:42.618358 kernel: loop3: detected capacity change from 0 to 105936
Jul 11 04:43:42.625330 kernel: loop4: detected capacity change from 0 to 134232
Jul 11 04:43:42.633423 kernel: loop5: detected capacity change from 0 to 203944
Jul 11 04:43:42.637252 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Jul 11 04:43:42.637264 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Jul 11 04:43:42.638459 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 04:43:42.638878 (sd-merge)[1222]: Merged extensions into '/usr'.
Jul 11 04:43:42.640993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 04:43:42.644434 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 04:43:42.644450 systemd[1]: Reloading...
Jul 11 04:43:42.721532 zram_generator::config[1251]: No configuration found.
Jul 11 04:43:42.769859 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 04:43:42.801603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 04:43:42.875976 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 04:43:42.876369 systemd[1]: Reloading finished in 231 ms.
Jul 11 04:43:42.906996 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 04:43:42.908736 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 04:43:42.922536 systemd[1]: Starting ensure-sysext.service...
Jul 11 04:43:42.924303 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 04:43:42.936259 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Jul 11 04:43:42.936280 systemd[1]: Reloading...
Jul 11 04:43:42.939043 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 04:43:42.939074 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 04:43:42.939360 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 04:43:42.939552 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 04:43:42.940156 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 04:43:42.940400 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 11 04:43:42.940454 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Jul 11 04:43:42.942959 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 04:43:42.942974 systemd-tmpfiles[1286]: Skipping /boot
Jul 11 04:43:42.948697 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 04:43:42.948713 systemd-tmpfiles[1286]: Skipping /boot
Jul 11 04:43:42.986348 zram_generator::config[1316]: No configuration found.
Jul 11 04:43:43.052211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 04:43:43.126859 systemd[1]: Reloading finished in 190 ms.
Jul 11 04:43:43.145099 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 04:43:43.150742 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 04:43:43.164332 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 04:43:43.166512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 04:43:43.168751 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 04:43:43.173451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 04:43:43.175971 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 04:43:43.179607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 04:43:43.189168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 04:43:43.201381 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 04:43:43.204419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 04:43:43.207028 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 04:43:43.209302 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 04:43:43.209453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 04:43:43.211076 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 04:43:43.213373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 04:43:43.213537 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 04:43:43.215552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 04:43:43.215706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 04:43:43.217612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 04:43:43.217753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 04:43:43.219425 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 04:43:43.229718 systemd-udevd[1359]: Using default interface naming scheme 'v255'.
Jul 11 04:43:43.231235 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 04:43:43.235220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 04:43:43.236732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 04:43:43.238867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 04:43:43.241533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 04:43:43.242641 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 04:43:43.242767 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 04:43:43.246004 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 04:43:43.249564 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 04:43:43.250529 augenrules[1388]: No rules
Jul 11 04:43:43.250569 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 04:43:43.252072 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 04:43:43.252286 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 04:43:43.253777 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 04:43:43.253937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 04:43:43.255593 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 04:43:43.255729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 04:43:43.257424 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 04:43:43.262709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 04:43:43.264247 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 04:43:43.267131 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 04:43:43.281408 systemd[1]: Finished ensure-sysext.service.
Jul 11 04:43:43.292536 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 04:43:43.293634 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 04:43:43.296566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 04:43:43.298609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 04:43:43.309150 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 04:43:43.311385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 04:43:43.313016 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 04:43:43.313061 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 04:43:43.315811 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 04:43:43.319338 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 04:43:43.320388 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 04:43:43.321018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 04:43:43.321193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 04:43:43.322661 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 04:43:43.322823 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 04:43:43.324114 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 04:43:43.324256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 04:43:43.325978 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 04:43:43.326363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 04:43:43.334245 augenrules[1427]: /sbin/augenrules: No change
Jul 11 04:43:43.334966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 04:43:43.335050 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 04:43:43.336838 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 11 04:43:43.345631 augenrules[1457]: No rules
Jul 11 04:43:43.348372 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 04:43:43.348633 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 04:43:43.357819 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 04:43:43.397086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 04:43:43.399692 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 11 04:43:43.436761 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 11 04:43:43.463711 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 11 04:43:43.465448 systemd[1]: Reached target time-set.target - System Time Set.
Jul 11 04:43:43.473416 systemd-resolved[1352]: Positive Trust Anchors:
Jul 11 04:43:43.473434 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 04:43:43.473466 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 04:43:43.474040 systemd-networkd[1438]: lo: Link UP
Jul 11 04:43:43.474046 systemd-networkd[1438]: lo: Gained carrier
Jul 11 04:43:43.474904 systemd-networkd[1438]: Enumeration completed
Jul 11 04:43:43.475016 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 04:43:43.475370 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 04:43:43.475379 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 04:43:43.478428 systemd-networkd[1438]: eth0: Link UP
Jul 11 04:43:43.478556 systemd-networkd[1438]: eth0: Gained carrier
Jul 11 04:43:43.478571 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 04:43:43.478947 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 11 04:43:43.481420 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Jul 11 04:43:43.482498 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 11 04:43:43.484652 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 04:43:43.486448 systemd[1]: Reached target network.target - Network. Jul 11 04:43:43.487331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 04:43:43.489473 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 04:43:43.490642 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 04:43:43.492479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 04:43:43.495999 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 04:43:43.497972 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 04:43:43.499383 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 04:43:43.500710 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 04:43:43.500744 systemd[1]: Reached target paths.target - Path Units. Jul 11 04:43:43.501741 systemd[1]: Reached target timers.target - Timer Units. Jul 11 04:43:43.503440 systemd-networkd[1438]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 04:43:43.503614 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 04:43:43.506060 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 04:43:43.510601 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 11 04:43:43.511935 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Jul 11 04:43:43.512026 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 11 04:43:43.513561 systemd-timesyncd[1439]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Jul 11 04:43:43.513686 systemd-timesyncd[1439]: Initial clock synchronization to Fri 2025-07-11 04:43:43.589040 UTC. Jul 11 04:43:43.513784 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 11 04:43:43.517421 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 04:43:43.518893 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 11 04:43:43.522366 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 11 04:43:43.524372 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 04:43:43.526123 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 04:43:43.527518 systemd[1]: Reached target basic.target - Basic System. Jul 11 04:43:43.529296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 04:43:43.529513 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 04:43:43.530845 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 04:43:43.534462 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 04:43:43.536531 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 04:43:43.540467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 04:43:43.543541 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 04:43:43.545549 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 04:43:43.546990 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 04:43:43.552396 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 11 04:43:43.555410 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 04:43:43.561242 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 04:43:43.568106 jq[1501]: false Jul 11 04:43:43.572779 extend-filesystems[1502]: Found /dev/vda6 Jul 11 04:43:43.573465 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 04:43:43.576221 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 04:43:43.576673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 04:43:43.578449 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 04:43:43.580614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 04:43:43.584770 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 04:43:43.586820 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 04:43:43.587389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 04:43:43.587649 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 04:43:43.587814 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 04:43:43.589017 extend-filesystems[1502]: Found /dev/vda9 Jul 11 04:43:43.590282 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 04:43:43.590633 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 11 04:43:43.590829 jq[1518]: true Jul 11 04:43:43.591975 extend-filesystems[1502]: Checking size of /dev/vda9 Jul 11 04:43:43.602413 jq[1526]: true Jul 11 04:43:43.611072 extend-filesystems[1502]: Resized partition /dev/vda9 Jul 11 04:43:43.621285 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025) Jul 11 04:43:43.620733 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 04:43:43.626325 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 04:43:43.631780 dbus-daemon[1499]: [system] SELinux support is enabled Jul 11 04:43:43.632039 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 04:43:43.636863 tar[1524]: linux-arm64/helm Jul 11 04:43:43.636873 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 04:43:43.636913 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 04:43:43.639041 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 04:43:43.639088 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 04:43:43.656350 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 04:43:43.674519 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 04:43:43.674519 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 04:43:43.674519 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 11 04:43:43.656616 (ntainerd)[1557]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 04:43:43.687744 bash[1556]: Updated "/home/core/.ssh/authorized_keys" Jul 11 04:43:43.687807 extend-filesystems[1502]: Resized filesystem in /dev/vda9 Jul 11 04:43:43.674605 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 04:43:43.675953 systemd-logind[1514]: New seat seat0. Jul 11 04:43:43.676330 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 04:43:43.677412 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 04:43:43.695471 update_engine[1516]: I20250711 04:43:43.695147 1516 main.cc:92] Flatcar Update Engine starting Jul 11 04:43:43.700697 update_engine[1516]: I20250711 04:43:43.700199 1516 update_check_scheduler.cc:74] Next update check in 11m39s Jul 11 04:43:43.710180 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 04:43:43.712127 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 04:43:43.714764 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 04:43:43.740445 systemd[1]: Started update-engine.service - Update Engine. Jul 11 04:43:43.742831 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 04:43:43.746830 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 11 04:43:43.847488 locksmithd[1573]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 04:43:43.878921 containerd[1557]: time="2025-07-11T04:43:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 11 04:43:43.880127 containerd[1557]: time="2025-07-11T04:43:43.880064560Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 11 04:43:43.891704 containerd[1557]: time="2025-07-11T04:43:43.891608320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13µs" Jul 11 04:43:43.891704 containerd[1557]: time="2025-07-11T04:43:43.891665280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 11 04:43:43.891807 containerd[1557]: time="2025-07-11T04:43:43.891720080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 11 04:43:43.892027 containerd[1557]: time="2025-07-11T04:43:43.891985280Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892326680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892372280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892437960Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892449320Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892645480Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892660280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892671040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892678240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892740520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892905120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893349 containerd[1557]: time="2025-07-11T04:43:43.892930080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 04:43:43.893558 containerd[1557]: time="2025-07-11T04:43:43.892939760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 11 04:43:43.893558 containerd[1557]: time="2025-07-11T04:43:43.892978760Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 11 04:43:43.893558 containerd[1557]: time="2025-07-11T04:43:43.893166160Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 11 04:43:43.893558 containerd[1557]: time="2025-07-11T04:43:43.893223760Z" level=info msg="metadata content store policy set" policy=shared Jul 11 04:43:43.896901 containerd[1557]: time="2025-07-11T04:43:43.896875080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 11 04:43:43.897020 containerd[1557]: time="2025-07-11T04:43:43.897004800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 11 04:43:43.897098 containerd[1557]: time="2025-07-11T04:43:43.897086240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 11 04:43:43.897150 containerd[1557]: time="2025-07-11T04:43:43.897138680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 11 04:43:43.897202 containerd[1557]: time="2025-07-11T04:43:43.897189960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 11 04:43:43.897260 containerd[1557]: time="2025-07-11T04:43:43.897248680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 11 04:43:43.897337 containerd[1557]: time="2025-07-11T04:43:43.897323480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 11 04:43:43.897407 containerd[1557]: time="2025-07-11T04:43:43.897393480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 11 04:43:43.897457 containerd[1557]: time="2025-07-11T04:43:43.897446120Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 11 04:43:43.897506 containerd[1557]: time="2025-07-11T04:43:43.897493720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 11 04:43:43.897555 containerd[1557]: time="2025-07-11T04:43:43.897543800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 11 04:43:43.897606 containerd[1557]: time="2025-07-11T04:43:43.897594840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 11 04:43:43.897758 containerd[1557]: time="2025-07-11T04:43:43.897738080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 11 04:43:43.897837 containerd[1557]: time="2025-07-11T04:43:43.897822760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 11 04:43:43.897897 containerd[1557]: time="2025-07-11T04:43:43.897885400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 11 04:43:43.897953 containerd[1557]: time="2025-07-11T04:43:43.897941680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 11 04:43:43.898004 containerd[1557]: time="2025-07-11T04:43:43.897992600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 11 04:43:43.898055 containerd[1557]: time="2025-07-11T04:43:43.898043560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 11 04:43:43.898117 containerd[1557]: time="2025-07-11T04:43:43.898103600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 11 04:43:43.898178 containerd[1557]: time="2025-07-11T04:43:43.898165920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 11 
04:43:43.898226 containerd[1557]: time="2025-07-11T04:43:43.898215240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 11 04:43:43.898289 containerd[1557]: time="2025-07-11T04:43:43.898262840Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 11 04:43:43.898378 containerd[1557]: time="2025-07-11T04:43:43.898363280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 11 04:43:43.898704 containerd[1557]: time="2025-07-11T04:43:43.898688240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 11 04:43:43.898772 containerd[1557]: time="2025-07-11T04:43:43.898760840Z" level=info msg="Start snapshots syncer" Jul 11 04:43:43.898847 containerd[1557]: time="2025-07-11T04:43:43.898832760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 11 04:43:43.899107 containerd[1557]: time="2025-07-11T04:43:43.899070640Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 11 04:43:43.899251 containerd[1557]: time="2025-07-11T04:43:43.899234360Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 11 04:43:43.900074 containerd[1557]: time="2025-07-11T04:43:43.900041480Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 04:43:43.900267 containerd[1557]: time="2025-07-11T04:43:43.900246800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 04:43:43.900367 containerd[1557]: time="2025-07-11T04:43:43.900351960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 04:43:43.900419 containerd[1557]: time="2025-07-11T04:43:43.900407160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 04:43:43.900474 containerd[1557]: time="2025-07-11T04:43:43.900461760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 04:43:43.900540 containerd[1557]: time="2025-07-11T04:43:43.900527280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 04:43:43.900588 containerd[1557]: time="2025-07-11T04:43:43.900576680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 04:43:43.900643 containerd[1557]: time="2025-07-11T04:43:43.900631240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 04:43:43.900740 containerd[1557]: time="2025-07-11T04:43:43.900724600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 04:43:43.900792 containerd[1557]: time="2025-07-11T04:43:43.900781360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 04:43:43.900846 containerd[1557]: time="2025-07-11T04:43:43.900834400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 04:43:43.900930 containerd[1557]: time="2025-07-11T04:43:43.900915400Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 04:43:43.900985 containerd[1557]: time="2025-07-11T04:43:43.900972440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 04:43:43.901029 containerd[1557]: time="2025-07-11T04:43:43.901017680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 04:43:43.901088 containerd[1557]: time="2025-07-11T04:43:43.901074880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 04:43:43.901131 containerd[1557]: time="2025-07-11T04:43:43.901120560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 04:43:43.901176 containerd[1557]: time="2025-07-11T04:43:43.901165480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 04:43:43.901224 containerd[1557]: time="2025-07-11T04:43:43.901212080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 04:43:43.901402 containerd[1557]: time="2025-07-11T04:43:43.901389600Z" level=info msg="runtime interface created" Jul 11 04:43:43.901444 containerd[1557]: time="2025-07-11T04:43:43.901435280Z" level=info msg="created NRI interface" Jul 11 04:43:43.901491 containerd[1557]: time="2025-07-11T04:43:43.901478880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 04:43:43.901552 containerd[1557]: time="2025-07-11T04:43:43.901541040Z" level=info msg="Connect containerd service" Jul 11 04:43:43.901622 containerd[1557]: time="2025-07-11T04:43:43.901610040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 04:43:43.902432 
containerd[1557]: time="2025-07-11T04:43:43.902404360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 04:43:43.977356 tar[1524]: linux-arm64/LICENSE Jul 11 04:43:43.977356 tar[1524]: linux-arm64/README.md Jul 11 04:43:43.993363 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 04:43:44.011652 containerd[1557]: time="2025-07-11T04:43:44.011542830Z" level=info msg="Start subscribing containerd event" Jul 11 04:43:44.011652 containerd[1557]: time="2025-07-11T04:43:44.011656122Z" level=info msg="Start recovering state" Jul 11 04:43:44.011814 containerd[1557]: time="2025-07-11T04:43:44.011786575Z" level=info msg="Start event monitor" Jul 11 04:43:44.011814 containerd[1557]: time="2025-07-11T04:43:44.011810448Z" level=info msg="Start cni network conf syncer for default" Jul 11 04:43:44.011856 containerd[1557]: time="2025-07-11T04:43:44.011818164Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 04:43:44.011887 containerd[1557]: time="2025-07-11T04:43:44.011820776Z" level=info msg="Start streaming server" Jul 11 04:43:44.011926 containerd[1557]: time="2025-07-11T04:43:44.011870731Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 04:43:44.011926 containerd[1557]: time="2025-07-11T04:43:44.011895527Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 04:43:44.011926 containerd[1557]: time="2025-07-11T04:43:44.011914617Z" level=info msg="runtime interface starting up..." Jul 11 04:43:44.011926 containerd[1557]: time="2025-07-11T04:43:44.011920525Z" level=info msg="starting plugins..." 
Jul 11 04:43:44.011988 containerd[1557]: time="2025-07-11T04:43:44.011936600Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 04:43:44.012112 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 04:43:44.013537 containerd[1557]: time="2025-07-11T04:43:44.013260302Z" level=info msg="containerd successfully booted in 0.134672s" Jul 11 04:43:44.043069 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 04:43:44.061520 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 04:43:44.064402 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 04:43:44.081262 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 04:43:44.081499 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 04:43:44.083907 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 04:43:44.100249 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 04:43:44.104940 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 04:43:44.107043 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 04:43:44.108353 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 04:43:45.286596 systemd-networkd[1438]: eth0: Gained IPv6LL Jul 11 04:43:45.288824 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 04:43:45.290566 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 04:43:45.292852 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 04:43:45.295299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:43:45.305691 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 04:43:45.318727 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jul 11 04:43:45.318917 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 04:43:45.320798 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 04:43:45.323296 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 04:43:45.834731 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:43:45.836416 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 04:43:45.838406 systemd[1]: Startup finished in 2.088s (kernel) + 6.188s (initrd) + 4.082s (userspace) = 12.359s. Jul 11 04:43:45.838917 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 04:43:46.252914 kubelet[1637]: E0711 04:43:46.252804 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 04:43:46.255628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 04:43:46.255861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 04:43:46.257401 systemd[1]: kubelet.service: Consumed 807ms CPU time, 256.8M memory peak. Jul 11 04:43:48.135109 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 04:43:48.136153 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:51106.service - OpenSSH per-connection server daemon (10.0.0.1:51106). 
Jul 11 04:43:48.219376 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 51106 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:48.220781 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:48.228445 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 04:43:48.229301 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 04:43:48.234668 systemd-logind[1514]: New session 1 of user core. Jul 11 04:43:48.245349 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 04:43:48.247514 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 04:43:48.268166 (systemd)[1655]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 04:43:48.270505 systemd-logind[1514]: New session c1 of user core. Jul 11 04:43:48.372603 systemd[1655]: Queued start job for default target default.target. Jul 11 04:43:48.389157 systemd[1655]: Created slice app.slice - User Application Slice. Jul 11 04:43:48.389307 systemd[1655]: Reached target paths.target - Paths. Jul 11 04:43:48.389462 systemd[1655]: Reached target timers.target - Timers. Jul 11 04:43:48.390621 systemd[1655]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 04:43:48.399414 systemd[1655]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 04:43:48.399478 systemd[1655]: Reached target sockets.target - Sockets. Jul 11 04:43:48.399516 systemd[1655]: Reached target basic.target - Basic System. Jul 11 04:43:48.399547 systemd[1655]: Reached target default.target - Main User Target. Jul 11 04:43:48.399575 systemd[1655]: Startup finished in 124ms. Jul 11 04:43:48.399795 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 04:43:48.401296 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 11 04:43:48.465421 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:51118.service - OpenSSH per-connection server daemon (10.0.0.1:51118). Jul 11 04:43:48.516765 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 51118 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:48.517839 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:48.521544 systemd-logind[1514]: New session 2 of user core. Jul 11 04:43:48.536467 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 04:43:48.585590 sshd[1669]: Connection closed by 10.0.0.1 port 51118 Jul 11 04:43:48.585900 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Jul 11 04:43:48.603408 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:51118.service: Deactivated successfully. Jul 11 04:43:48.605564 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 04:43:48.607481 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. Jul 11 04:43:48.609473 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:51134.service - OpenSSH per-connection server daemon (10.0.0.1:51134). Jul 11 04:43:48.609930 systemd-logind[1514]: Removed session 2. Jul 11 04:43:48.655186 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 51134 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:48.656153 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:48.659652 systemd-logind[1514]: New session 3 of user core. Jul 11 04:43:48.681476 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 04:43:48.729054 sshd[1678]: Connection closed by 10.0.0.1 port 51134 Jul 11 04:43:48.729347 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Jul 11 04:43:48.739266 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:51134.service: Deactivated successfully. 
Jul 11 04:43:48.740650 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 04:43:48.742913 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit. Jul 11 04:43:48.744919 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:51144.service - OpenSSH per-connection server daemon (10.0.0.1:51144). Jul 11 04:43:48.745401 systemd-logind[1514]: Removed session 3. Jul 11 04:43:48.789203 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 51144 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:48.790204 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:48.793630 systemd-logind[1514]: New session 4 of user core. Jul 11 04:43:48.804522 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 04:43:48.855210 sshd[1687]: Connection closed by 10.0.0.1 port 51144 Jul 11 04:43:48.855558 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Jul 11 04:43:48.866295 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:51144.service: Deactivated successfully. Jul 11 04:43:48.868570 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 04:43:48.869166 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit. Jul 11 04:43:48.871165 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:51156.service - OpenSSH per-connection server daemon (10.0.0.1:51156). Jul 11 04:43:48.871622 systemd-logind[1514]: Removed session 4. Jul 11 04:43:48.920436 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 51156 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:48.922376 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:48.925749 systemd-logind[1514]: New session 5 of user core. Jul 11 04:43:48.937490 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 11 04:43:48.999141 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 04:43:48.999435 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 04:43:49.020143 sudo[1697]: pam_unix(sudo:session): session closed for user root Jul 11 04:43:49.021878 sshd[1696]: Connection closed by 10.0.0.1 port 51156 Jul 11 04:43:49.021785 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Jul 11 04:43:49.036459 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:51156.service: Deactivated successfully. Jul 11 04:43:49.038524 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 04:43:49.040814 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit. Jul 11 04:43:49.042974 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:51160.service - OpenSSH per-connection server daemon (10.0.0.1:51160). Jul 11 04:43:49.043456 systemd-logind[1514]: Removed session 5. Jul 11 04:43:49.092964 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 51160 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:49.094044 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:49.097705 systemd-logind[1514]: New session 6 of user core. Jul 11 04:43:49.113528 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 11 04:43:49.163378 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 04:43:49.163870 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 04:43:49.167944 sudo[1708]: pam_unix(sudo:session): session closed for user root Jul 11 04:43:49.172361 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 04:43:49.172635 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 04:43:49.181224 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 04:43:49.211604 augenrules[1730]: No rules Jul 11 04:43:49.212772 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 04:43:49.214349 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 04:43:49.215136 sudo[1707]: pam_unix(sudo:session): session closed for user root Jul 11 04:43:49.216392 sshd[1706]: Connection closed by 10.0.0.1 port 51160 Jul 11 04:43:49.216525 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jul 11 04:43:49.227390 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:51160.service: Deactivated successfully. Jul 11 04:43:49.229592 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 04:43:49.230190 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit. Jul 11 04:43:49.231917 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:51166.service - OpenSSH per-connection server daemon (10.0.0.1:51166). Jul 11 04:43:49.232937 systemd-logind[1514]: Removed session 6. Jul 11 04:43:49.276194 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 51166 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:43:49.277182 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:43:49.280837 systemd-logind[1514]: New session 7 of user core. 
Jul 11 04:43:49.289462 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 04:43:49.339555 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 04:43:49.339801 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 04:43:49.682416 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 04:43:49.693687 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 04:43:49.930556 dockerd[1764]: time="2025-07-11T04:43:49.930498515Z" level=info msg="Starting up" Jul 11 04:43:49.931248 dockerd[1764]: time="2025-07-11T04:43:49.931222181Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 04:43:49.940712 dockerd[1764]: time="2025-07-11T04:43:49.940624949Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 04:43:50.072571 dockerd[1764]: time="2025-07-11T04:43:50.072523849Z" level=info msg="Loading containers: start." Jul 11 04:43:50.080341 kernel: Initializing XFRM netlink socket Jul 11 04:43:50.261294 systemd-networkd[1438]: docker0: Link UP Jul 11 04:43:50.264478 dockerd[1764]: time="2025-07-11T04:43:50.264440489Z" level=info msg="Loading containers: done." 
Jul 11 04:43:50.280219 dockerd[1764]: time="2025-07-11T04:43:50.280173130Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 04:43:50.280358 dockerd[1764]: time="2025-07-11T04:43:50.280248411Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 04:43:50.280358 dockerd[1764]: time="2025-07-11T04:43:50.280350497Z" level=info msg="Initializing buildkit" Jul 11 04:43:50.302842 dockerd[1764]: time="2025-07-11T04:43:50.302772718Z" level=info msg="Completed buildkit initialization" Jul 11 04:43:50.307339 dockerd[1764]: time="2025-07-11T04:43:50.307266279Z" level=info msg="Daemon has completed initialization" Jul 11 04:43:50.307787 dockerd[1764]: time="2025-07-11T04:43:50.307338951Z" level=info msg="API listen on /run/docker.sock" Jul 11 04:43:50.307486 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 04:43:50.953236 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3189284234-merged.mount: Deactivated successfully. Jul 11 04:43:51.213600 containerd[1557]: time="2025-07-11T04:43:51.213504250Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 04:43:51.936297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1526287027.mount: Deactivated successfully. 
Jul 11 04:43:53.195738 containerd[1557]: time="2025-07-11T04:43:53.195678699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:53.196670 containerd[1557]: time="2025-07-11T04:43:53.196625714Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 11 04:43:53.197285 containerd[1557]: time="2025-07-11T04:43:53.197234278Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:53.200246 containerd[1557]: time="2025-07-11T04:43:53.200205747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:53.201020 containerd[1557]: time="2025-07-11T04:43:53.200993182Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.987446447s" Jul 11 04:43:53.201071 containerd[1557]: time="2025-07-11T04:43:53.201029197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 11 04:43:53.204087 containerd[1557]: time="2025-07-11T04:43:53.204017511Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 04:43:54.764678 containerd[1557]: time="2025-07-11T04:43:54.764631658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:54.765463 containerd[1557]: time="2025-07-11T04:43:54.765425460Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 11 04:43:54.766258 containerd[1557]: time="2025-07-11T04:43:54.766210920Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:54.768882 containerd[1557]: time="2025-07-11T04:43:54.768832517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:54.769827 containerd[1557]: time="2025-07-11T04:43:54.769792810Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.56573002s" Jul 11 04:43:54.769892 containerd[1557]: time="2025-07-11T04:43:54.769829621Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 11 04:43:54.770263 containerd[1557]: time="2025-07-11T04:43:54.770229689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 04:43:56.155202 containerd[1557]: time="2025-07-11T04:43:56.155126504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:56.156496 containerd[1557]: time="2025-07-11T04:43:56.156464610Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 11 04:43:56.157222 containerd[1557]: time="2025-07-11T04:43:56.157176396Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:56.159869 containerd[1557]: time="2025-07-11T04:43:56.159813484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:56.160674 containerd[1557]: time="2025-07-11T04:43:56.160644008Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.390382324s" Jul 11 04:43:56.160795 containerd[1557]: time="2025-07-11T04:43:56.160746431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 11 04:43:56.161223 containerd[1557]: time="2025-07-11T04:43:56.161190435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 04:43:56.278703 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 04:43:56.280052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:43:56.427143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 04:43:56.430835 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 04:43:56.473929 kubelet[2054]: E0711 04:43:56.473875 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 04:43:56.476733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 04:43:56.476879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 04:43:56.477370 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.8M memory peak. Jul 11 04:43:57.574635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899283089.mount: Deactivated successfully. Jul 11 04:43:58.107367 containerd[1557]: time="2025-07-11T04:43:58.107298601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:58.108197 containerd[1557]: time="2025-07-11T04:43:58.107993288Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 11 04:43:58.108945 containerd[1557]: time="2025-07-11T04:43:58.108914447Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:58.111347 containerd[1557]: time="2025-07-11T04:43:58.111292227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:58.111939 containerd[1557]: time="2025-07-11T04:43:58.111770100Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.950551045s" Jul 11 04:43:58.111939 containerd[1557]: time="2025-07-11T04:43:58.111890530Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 11 04:43:58.112954 containerd[1557]: time="2025-07-11T04:43:58.112753217Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 04:43:58.741748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217824165.mount: Deactivated successfully. Jul 11 04:43:59.792034 containerd[1557]: time="2025-07-11T04:43:59.791991118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:59.792878 containerd[1557]: time="2025-07-11T04:43:59.792841361Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 04:43:59.793643 containerd[1557]: time="2025-07-11T04:43:59.793615707Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:59.796091 containerd[1557]: time="2025-07-11T04:43:59.796056958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:43:59.797273 containerd[1557]: time="2025-07-11T04:43:59.797158370Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with 
image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.684369966s" Jul 11 04:43:59.797273 containerd[1557]: time="2025-07-11T04:43:59.797190187Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 04:43:59.797806 containerd[1557]: time="2025-07-11T04:43:59.797789781Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 04:44:00.333772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280543277.mount: Deactivated successfully. Jul 11 04:44:00.338227 containerd[1557]: time="2025-07-11T04:44:00.338181642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 04:44:00.338801 containerd[1557]: time="2025-07-11T04:44:00.338769469Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 04:44:00.340006 containerd[1557]: time="2025-07-11T04:44:00.339972769Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 04:44:00.342560 containerd[1557]: time="2025-07-11T04:44:00.342526616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 04:44:00.343727 containerd[1557]: time="2025-07-11T04:44:00.343695618Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 545.832708ms" Jul 11 04:44:00.343760 containerd[1557]: time="2025-07-11T04:44:00.343725829Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 04:44:00.344130 containerd[1557]: time="2025-07-11T04:44:00.344098895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 04:44:01.040072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171285911.mount: Deactivated successfully. Jul 11 04:44:03.471031 containerd[1557]: time="2025-07-11T04:44:03.470981366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:03.471767 containerd[1557]: time="2025-07-11T04:44:03.471736932Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 11 04:44:03.472483 containerd[1557]: time="2025-07-11T04:44:03.472454765Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:03.475889 containerd[1557]: time="2025-07-11T04:44:03.475844295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:03.476549 containerd[1557]: time="2025-07-11T04:44:03.476509415Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.13237907s" Jul 11 04:44:03.476549 containerd[1557]: time="2025-07-11T04:44:03.476548589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 11 04:44:06.528730 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 04:44:06.530168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:44:06.713697 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:06.723629 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 04:44:06.756378 kubelet[2213]: E0711 04:44:06.756294 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 04:44:06.758738 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 04:44:06.758873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 04:44:06.759204 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.3M memory peak. Jul 11 04:44:09.020850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:09.021004 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.3M memory peak. Jul 11 04:44:09.022938 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:44:09.043703 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)... 
Jul 11 04:44:09.043720 systemd[1]: Reloading... Jul 11 04:44:09.121404 zram_generator::config[2269]: No configuration found. Jul 11 04:44:09.216632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 04:44:09.318678 systemd[1]: Reloading finished in 274 ms. Jul 11 04:44:09.384746 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 04:44:09.384970 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 04:44:09.385298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:09.386403 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95M memory peak. Jul 11 04:44:09.387804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:44:09.509928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:09.514679 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 04:44:09.548476 kubelet[2316]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 04:44:09.548476 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 04:44:09.548476 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 04:44:09.548805 kubelet[2316]: I0711 04:44:09.548518 2316 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 04:44:10.472135 kubelet[2316]: I0711 04:44:10.472091 2316 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 04:44:10.472267 kubelet[2316]: I0711 04:44:10.472257 2316 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 04:44:10.472604 kubelet[2316]: I0711 04:44:10.472584 2316 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 04:44:10.511342 kubelet[2316]: E0711 04:44:10.511280 2316 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 11 04:44:10.512095 kubelet[2316]: I0711 04:44:10.512057 2316 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 04:44:10.521008 kubelet[2316]: I0711 04:44:10.520757 2316 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 04:44:10.524749 kubelet[2316]: I0711 04:44:10.524716 2316 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 04:44:10.525045 kubelet[2316]: I0711 04:44:10.525017 2316 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 04:44:10.525153 kubelet[2316]: I0711 04:44:10.525128 2316 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 04:44:10.525304 kubelet[2316]: I0711 04:44:10.525154 2316 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 11 04:44:10.525392 kubelet[2316]: I0711 04:44:10.525384 2316 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 04:44:10.525392 kubelet[2316]: I0711 04:44:10.525392 2316 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 04:44:10.525641 kubelet[2316]: I0711 04:44:10.525617 2316 state_mem.go:36] "Initialized new in-memory state store" Jul 11 04:44:10.527637 kubelet[2316]: I0711 04:44:10.527441 2316 kubelet.go:408] "Attempting to sync node with API server" Jul 11 04:44:10.527637 kubelet[2316]: I0711 04:44:10.527467 2316 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 04:44:10.527637 kubelet[2316]: I0711 04:44:10.527489 2316 kubelet.go:314] "Adding apiserver pod source" Jul 11 04:44:10.527637 kubelet[2316]: I0711 04:44:10.527562 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 04:44:10.536044 kubelet[2316]: I0711 04:44:10.535746 2316 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 04:44:10.536635 kubelet[2316]: I0711 04:44:10.536616 2316 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 04:44:10.536824 kubelet[2316]: W0711 04:44:10.536795 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 11 04:44:10.538001 kubelet[2316]: I0711 04:44:10.537851 2316 server.go:1274] "Started kubelet" Jul 11 04:44:10.538327 kubelet[2316]: W0711 04:44:10.538265 2316 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 11 04:44:10.538461 kubelet[2316]: E0711 04:44:10.538440 2316 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 11 04:44:10.539080 kubelet[2316]: I0711 04:44:10.539040 2316 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 04:44:10.540004 kubelet[2316]: I0711 04:44:10.539969 2316 server.go:449] "Adding debug handlers to kubelet server" Jul 11 04:44:10.540231 kubelet[2316]: I0711 04:44:10.540211 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 04:44:10.540438 kubelet[2316]: W0711 04:44:10.540384 2316 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 11 04:44:10.540477 kubelet[2316]: E0711 04:44:10.540444 2316 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 11 04:44:10.540614 kubelet[2316]: I0711 04:44:10.540569 2316 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 04:44:10.540835 kubelet[2316]: I0711 04:44:10.540814 2316 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 04:44:10.541304 kubelet[2316]: I0711 04:44:10.541283 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 04:44:10.542349 kubelet[2316]: I0711 04:44:10.541762 2316 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 04:44:10.542349 kubelet[2316]: E0711 04:44:10.541785 2316 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 04:44:10.542349 kubelet[2316]: I0711 04:44:10.541872 2316 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 04:44:10.542349 kubelet[2316]: I0711 04:44:10.541918 2316 reconciler.go:26] "Reconciler: start to sync state" Jul 11 04:44:10.542489 kubelet[2316]: W0711 04:44:10.542360 2316 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 11 04:44:10.542489 kubelet[2316]: E0711 04:44:10.542400 2316 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 11 04:44:10.542989 kubelet[2316]: E0711 04:44:10.542644 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: 
connection refused" interval="200ms" Jul 11 04:44:10.543730 kubelet[2316]: I0711 04:44:10.543702 2316 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 04:44:10.548537 kubelet[2316]: E0711 04:44:10.548505 2316 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 04:44:10.548821 kubelet[2316]: I0711 04:44:10.548714 2316 factory.go:221] Registration of the containerd container factory successfully Jul 11 04:44:10.548821 kubelet[2316]: I0711 04:44:10.548728 2316 factory.go:221] Registration of the systemd container factory successfully Jul 11 04:44:10.550132 kubelet[2316]: E0711 04:44:10.548355 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185118d90d8824be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 04:44:10.537829566 +0000 UTC m=+1.020077093,LastTimestamp:2025-07-11 04:44:10.537829566 +0000 UTC m=+1.020077093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 04:44:10.560099 kubelet[2316]: I0711 04:44:10.560073 2316 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 04:44:10.560099 kubelet[2316]: I0711 04:44:10.560094 2316 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 04:44:10.560197 kubelet[2316]: I0711 04:44:10.560112 2316 state_mem.go:36] 
"Initialized new in-memory state store" Jul 11 04:44:10.562926 kubelet[2316]: I0711 04:44:10.562888 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 04:44:10.563873 kubelet[2316]: I0711 04:44:10.563852 2316 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 04:44:10.563873 kubelet[2316]: I0711 04:44:10.563875 2316 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 04:44:10.564456 kubelet[2316]: I0711 04:44:10.563977 2316 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 04:44:10.564456 kubelet[2316]: E0711 04:44:10.564021 2316 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 04:44:10.564739 kubelet[2316]: W0711 04:44:10.564716 2316 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jul 11 04:44:10.564874 kubelet[2316]: E0711 04:44:10.564853 2316 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jul 11 04:44:10.642032 kubelet[2316]: E0711 04:44:10.641983 2316 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 04:44:10.664362 kubelet[2316]: E0711 04:44:10.664299 2316 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 04:44:10.719841 kubelet[2316]: I0711 04:44:10.719796 2316 policy_none.go:49] "None policy: Start" Jul 11 04:44:10.720632 kubelet[2316]: I0711 
04:44:10.720610 2316 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 04:44:10.720686 kubelet[2316]: I0711 04:44:10.720640 2316 state_mem.go:35] "Initializing new in-memory state store" Jul 11 04:44:10.738808 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 04:44:10.742456 kubelet[2316]: E0711 04:44:10.742424 2316 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 04:44:10.743851 kubelet[2316]: E0711 04:44:10.743813 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jul 11 04:44:10.751701 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 04:44:10.754839 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 11 04:44:10.765263 kubelet[2316]: I0711 04:44:10.765110 2316 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 04:44:10.765519 kubelet[2316]: I0711 04:44:10.765503 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 04:44:10.765629 kubelet[2316]: I0711 04:44:10.765521 2316 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 04:44:10.766227 kubelet[2316]: I0711 04:44:10.766180 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 04:44:10.767075 kubelet[2316]: E0711 04:44:10.767052 2316 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 04:44:10.866858 kubelet[2316]: I0711 04:44:10.866513 2316 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 04:44:10.867891 kubelet[2316]: E0711 04:44:10.867839 2316 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 11 04:44:10.873375 systemd[1]: Created slice kubepods-burstable-podf5511dcc5e35f0c0d5c300e34148fce9.slice - libcontainer container kubepods-burstable-podf5511dcc5e35f0c0d5c300e34148fce9.slice. Jul 11 04:44:10.892677 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 11 04:44:10.902703 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 11 04:44:10.943426 kubelet[2316]: I0711 04:44:10.943381 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:10.943507 kubelet[2316]: I0711 04:44:10.943437 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:10.943507 kubelet[2316]: I0711 04:44:10.943483 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:10.943559 kubelet[2316]: I0711 04:44:10.943511 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:10.943559 kubelet[2316]: I0711 04:44:10.943536 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " 
pod="kube-system/kube-scheduler-localhost" Jul 11 04:44:10.943604 kubelet[2316]: I0711 04:44:10.943574 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:10.943628 kubelet[2316]: I0711 04:44:10.943604 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:10.943647 kubelet[2316]: I0711 04:44:10.943626 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:10.943647 kubelet[2316]: I0711 04:44:10.943643 2316 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:11.069377 kubelet[2316]: I0711 04:44:11.069299 2316 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 04:44:11.069647 kubelet[2316]: E0711 04:44:11.069621 2316 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" 
node="localhost" Jul 11 04:44:11.144236 kubelet[2316]: E0711 04:44:11.144197 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jul 11 04:44:11.191657 kubelet[2316]: E0711 04:44:11.191565 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.192202 containerd[1557]: time="2025-07-11T04:44:11.192163857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5511dcc5e35f0c0d5c300e34148fce9,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:11.201442 kubelet[2316]: E0711 04:44:11.201389 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.201841 containerd[1557]: time="2025-07-11T04:44:11.201799416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:11.205068 kubelet[2316]: E0711 04:44:11.205037 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.205433 containerd[1557]: time="2025-07-11T04:44:11.205403314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:11.306447 containerd[1557]: time="2025-07-11T04:44:11.306401228Z" level=info msg="connecting to shim afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef" 
address="unix:///run/containerd/s/515c4d645e8512f008f1c39dfa87c309fe7f966ac725f3f37c4e8cbf383f599a" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:11.307561 containerd[1557]: time="2025-07-11T04:44:11.307526277Z" level=info msg="connecting to shim 9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b" address="unix:///run/containerd/s/011261f36b47c2852b5049a0fe849b541c024a6bdf301a5e3df2830321ef93de" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:11.310364 containerd[1557]: time="2025-07-11T04:44:11.309980665Z" level=info msg="connecting to shim a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673" address="unix:///run/containerd/s/b8e16de681780a041d7566ca69e1755fc51778949bdc4c7f982963ed448b8171" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:11.338547 systemd[1]: Started cri-containerd-afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef.scope - libcontainer container afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef. Jul 11 04:44:11.342397 systemd[1]: Started cri-containerd-9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b.scope - libcontainer container 9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b. Jul 11 04:44:11.343702 systemd[1]: Started cri-containerd-a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673.scope - libcontainer container a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673. 
Jul 11 04:44:11.379955 containerd[1557]: time="2025-07-11T04:44:11.379887254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f5511dcc5e35f0c0d5c300e34148fce9,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef\"" Jul 11 04:44:11.381747 kubelet[2316]: E0711 04:44:11.381720 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.382003 containerd[1557]: time="2025-07-11T04:44:11.381973898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b\"" Jul 11 04:44:11.382753 kubelet[2316]: E0711 04:44:11.382710 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.385848 containerd[1557]: time="2025-07-11T04:44:11.385773237Z" level=info msg="CreateContainer within sandbox \"9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 04:44:11.386747 containerd[1557]: time="2025-07-11T04:44:11.386205634Z" level=info msg="CreateContainer within sandbox \"afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 04:44:11.388823 containerd[1557]: time="2025-07-11T04:44:11.388781882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673\"" Jul 11 
04:44:11.389530 kubelet[2316]: E0711 04:44:11.389509 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.391172 containerd[1557]: time="2025-07-11T04:44:11.391147476Z" level=info msg="CreateContainer within sandbox \"a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 04:44:11.395783 containerd[1557]: time="2025-07-11T04:44:11.395750119Z" level=info msg="Container 3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:11.400360 containerd[1557]: time="2025-07-11T04:44:11.399787214Z" level=info msg="Container e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:11.404190 containerd[1557]: time="2025-07-11T04:44:11.404157824Z" level=info msg="Container a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:11.406224 containerd[1557]: time="2025-07-11T04:44:11.406192025Z" level=info msg="CreateContainer within sandbox \"9e43b6d6314dc90f14026bb471affec513f06b88b5ba2d1956938bba5f36935b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509\"" Jul 11 04:44:11.406977 containerd[1557]: time="2025-07-11T04:44:11.406933077Z" level=info msg="StartContainer for \"3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509\"" Jul 11 04:44:11.407948 containerd[1557]: time="2025-07-11T04:44:11.407915849Z" level=info msg="connecting to shim 3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509" address="unix:///run/containerd/s/011261f36b47c2852b5049a0fe849b541c024a6bdf301a5e3df2830321ef93de" protocol=ttrpc version=3 Jul 11 04:44:11.408592 
containerd[1557]: time="2025-07-11T04:44:11.408561542Z" level=info msg="CreateContainer within sandbox \"afb0e1dad472133dc5aa753847ec8ad90e389faca5731f1cea48832c099f0cef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960\"" Jul 11 04:44:11.409273 containerd[1557]: time="2025-07-11T04:44:11.409238421Z" level=info msg="StartContainer for \"e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960\"" Jul 11 04:44:11.410477 containerd[1557]: time="2025-07-11T04:44:11.410441775Z" level=info msg="connecting to shim e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960" address="unix:///run/containerd/s/515c4d645e8512f008f1c39dfa87c309fe7f966ac725f3f37c4e8cbf383f599a" protocol=ttrpc version=3 Jul 11 04:44:11.411808 containerd[1557]: time="2025-07-11T04:44:11.411718990Z" level=info msg="CreateContainer within sandbox \"a7ea8e40774275c8433fa74512c52add8d1837cce72aaa9dca37264d54a56673\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8\"" Jul 11 04:44:11.412238 containerd[1557]: time="2025-07-11T04:44:11.412202069Z" level=info msg="StartContainer for \"a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8\"" Jul 11 04:44:11.413168 containerd[1557]: time="2025-07-11T04:44:11.413129356Z" level=info msg="connecting to shim a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8" address="unix:///run/containerd/s/b8e16de681780a041d7566ca69e1755fc51778949bdc4c7f982963ed448b8171" protocol=ttrpc version=3 Jul 11 04:44:11.434555 systemd[1]: Started cri-containerd-3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509.scope - libcontainer container 3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509. 
Jul 11 04:44:11.435517 systemd[1]: Started cri-containerd-e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960.scope - libcontainer container e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960. Jul 11 04:44:11.439565 systemd[1]: Started cri-containerd-a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8.scope - libcontainer container a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8. Jul 11 04:44:11.473595 kubelet[2316]: I0711 04:44:11.472990 2316 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 04:44:11.473595 kubelet[2316]: E0711 04:44:11.473343 2316 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jul 11 04:44:11.479069 containerd[1557]: time="2025-07-11T04:44:11.477677639Z" level=info msg="StartContainer for \"e7d24774d33280736c4eaf6f79f672002699a1ae495272d12bde16d460d02960\" returns successfully" Jul 11 04:44:11.484693 containerd[1557]: time="2025-07-11T04:44:11.484650639Z" level=info msg="StartContainer for \"3a0acef8dfb629f614a536d6857b1e823c0b4e1b3d8ca4fc4605a349ba138509\" returns successfully" Jul 11 04:44:11.496905 containerd[1557]: time="2025-07-11T04:44:11.496871695Z" level=info msg="StartContainer for \"a1576431e34ac750da75c429615f568c49229fc100f637d18597a8992d498eb8\" returns successfully" Jul 11 04:44:11.571363 kubelet[2316]: E0711 04:44:11.570633 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.572956 kubelet[2316]: E0711 04:44:11.572930 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:11.573577 kubelet[2316]: E0711 04:44:11.573555 2316 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:12.275409 kubelet[2316]: I0711 04:44:12.275284 2316 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 04:44:12.575376 kubelet[2316]: E0711 04:44:12.575346 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:13.070885 kubelet[2316]: E0711 04:44:13.070851 2316 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:13.440663 kubelet[2316]: E0711 04:44:13.440561 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 04:44:13.538065 kubelet[2316]: E0711 04:44:13.537952 2316 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.185118d90d8824be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 04:44:10.537829566 +0000 UTC m=+1.020077093,LastTimestamp:2025-07-11 04:44:10.537829566 +0000 UTC m=+1.020077093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 04:44:13.539025 kubelet[2316]: I0711 04:44:13.538983 2316 apiserver.go:52] "Watching apiserver" Jul 11 04:44:13.542007 kubelet[2316]: I0711 04:44:13.541979 2316 desired_state_of_world_populator.go:155] "Finished 
populating initial desired state of world" Jul 11 04:44:13.599420 kubelet[2316]: I0711 04:44:13.599372 2316 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 04:44:15.470621 systemd[1]: Reload requested from client PID 2590 ('systemctl') (unit session-7.scope)... Jul 11 04:44:15.470637 systemd[1]: Reloading... Jul 11 04:44:15.546336 zram_generator::config[2633]: No configuration found. Jul 11 04:44:15.623383 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 04:44:15.738668 systemd[1]: Reloading finished in 267 ms. Jul 11 04:44:15.763894 kubelet[2316]: I0711 04:44:15.763851 2316 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 04:44:15.764547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:44:15.775731 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 04:44:15.777361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:15.777406 systemd[1]: kubelet.service: Consumed 1.443s CPU time, 128.3M memory peak. Jul 11 04:44:15.779457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 04:44:15.923452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 04:44:15.927985 (kubelet)[2675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 04:44:15.964812 kubelet[2675]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 04:44:15.964812 kubelet[2675]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 04:44:15.964812 kubelet[2675]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 04:44:15.964812 kubelet[2675]: I0711 04:44:15.964485 2675 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 04:44:15.970048 kubelet[2675]: I0711 04:44:15.970011 2675 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 04:44:15.970048 kubelet[2675]: I0711 04:44:15.970036 2675 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 04:44:15.970249 kubelet[2675]: I0711 04:44:15.970223 2675 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 04:44:15.971497 kubelet[2675]: I0711 04:44:15.971476 2675 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 11 04:44:15.973595 kubelet[2675]: I0711 04:44:15.973573 2675 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 04:44:15.976764 kubelet[2675]: I0711 04:44:15.976739 2675 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 04:44:15.979712 kubelet[2675]: I0711 04:44:15.979674 2675 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 04:44:15.979838 kubelet[2675]: I0711 04:44:15.979812 2675 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 04:44:15.979939 kubelet[2675]: I0711 04:44:15.979908 2675 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 04:44:15.980217 kubelet[2675]: I0711 04:44:15.979937 2675 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Jul 11 04:44:15.980308 kubelet[2675]: I0711 04:44:15.980222 2675 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 04:44:15.980308 kubelet[2675]: I0711 04:44:15.980233 2675 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 04:44:15.980308 kubelet[2675]: I0711 04:44:15.980267 2675 state_mem.go:36] "Initialized new in-memory state store" Jul 11 04:44:15.980390 kubelet[2675]: I0711 04:44:15.980376 2675 kubelet.go:408] "Attempting to sync node with API server" Jul 11 04:44:15.980410 kubelet[2675]: I0711 04:44:15.980394 2675 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 04:44:15.980431 kubelet[2675]: I0711 04:44:15.980412 2675 kubelet.go:314] "Adding apiserver pod source" Jul 11 04:44:15.980431 kubelet[2675]: I0711 04:44:15.980425 2675 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 04:44:15.980918 kubelet[2675]: I0711 04:44:15.980892 2675 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 04:44:15.981505 kubelet[2675]: I0711 04:44:15.981478 2675 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 04:44:15.982003 kubelet[2675]: I0711 04:44:15.981980 2675 server.go:1274] "Started kubelet" Jul 11 04:44:15.982744 kubelet[2675]: I0711 04:44:15.982702 2675 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 04:44:15.983515 kubelet[2675]: I0711 04:44:15.983371 2675 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 04:44:15.983869 kubelet[2675]: I0711 04:44:15.983846 2675 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 04:44:15.984482 kubelet[2675]: I0711 04:44:15.984462 2675 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 
04:44:15.985921 kubelet[2675]: I0711 04:44:15.985893 2675 server.go:449] "Adding debug handlers to kubelet server" Jul 11 04:44:15.987117 kubelet[2675]: I0711 04:44:15.986807 2675 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 04:44:15.987117 kubelet[2675]: E0711 04:44:15.987020 2675 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 04:44:15.987117 kubelet[2675]: I0711 04:44:15.987043 2675 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 04:44:15.987615 kubelet[2675]: I0711 04:44:15.987593 2675 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 04:44:15.987827 kubelet[2675]: I0711 04:44:15.987811 2675 reconciler.go:26] "Reconciler: start to sync state" Jul 11 04:44:15.991541 kubelet[2675]: I0711 04:44:15.991446 2675 factory.go:221] Registration of the systemd container factory successfully Jul 11 04:44:15.993332 kubelet[2675]: I0711 04:44:15.993287 2675 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 04:44:16.000115 kubelet[2675]: E0711 04:44:15.999136 2675 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 04:44:16.000711 kubelet[2675]: I0711 04:44:16.000691 2675 factory.go:221] Registration of the containerd container factory successfully Jul 11 04:44:16.002194 kubelet[2675]: I0711 04:44:16.002125 2675 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 04:44:16.004137 kubelet[2675]: I0711 04:44:16.004117 2675 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 04:44:16.004215 kubelet[2675]: I0711 04:44:16.004206 2675 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 04:44:16.004292 kubelet[2675]: I0711 04:44:16.004283 2675 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 04:44:16.004404 kubelet[2675]: E0711 04:44:16.004385 2675 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 04:44:16.032138 kubelet[2675]: I0711 04:44:16.032116 2675 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 04:44:16.032610 kubelet[2675]: I0711 04:44:16.032340 2675 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 04:44:16.032610 kubelet[2675]: I0711 04:44:16.032368 2675 state_mem.go:36] "Initialized new in-memory state store" Jul 11 04:44:16.032610 kubelet[2675]: I0711 04:44:16.032505 2675 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 04:44:16.032610 kubelet[2675]: I0711 04:44:16.032516 2675 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 04:44:16.032610 kubelet[2675]: I0711 04:44:16.032533 2675 policy_none.go:49] "None policy: Start" Jul 11 04:44:16.033328 kubelet[2675]: I0711 04:44:16.033296 2675 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 04:44:16.033462 kubelet[2675]: I0711 04:44:16.033450 2675 state_mem.go:35] "Initializing new in-memory state store" Jul 11 04:44:16.033639 kubelet[2675]: I0711 04:44:16.033625 2675 state_mem.go:75] "Updated machine memory state" Jul 11 04:44:16.037204 kubelet[2675]: I0711 04:44:16.037181 2675 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 04:44:16.037757 kubelet[2675]: I0711 04:44:16.037738 2675 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 04:44:16.037803 kubelet[2675]: I0711 04:44:16.037752 2675 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 04:44:16.038015 kubelet[2675]: I0711 04:44:16.037989 2675 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 04:44:16.141922 kubelet[2675]: I0711 04:44:16.141875 2675 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 04:44:16.147923 kubelet[2675]: I0711 04:44:16.147882 2675 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 04:44:16.148021 kubelet[2675]: I0711 04:44:16.147955 2675 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 04:44:16.189056 kubelet[2675]: I0711 04:44:16.188938 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:16.189056 kubelet[2675]: I0711 04:44:16.188975 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:16.189056 kubelet[2675]: I0711 04:44:16.188999 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 04:44:16.189056 kubelet[2675]: I0711 04:44:16.189015 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:16.189056 kubelet[2675]: I0711 04:44:16.189035 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:16.189252 kubelet[2675]: I0711 04:44:16.189076 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:16.189252 kubelet[2675]: I0711 04:44:16.189119 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 04:44:16.189252 kubelet[2675]: I0711 04:44:16.189143 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:16.189252 kubelet[2675]: I0711 04:44:16.189158 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/f5511dcc5e35f0c0d5c300e34148fce9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f5511dcc5e35f0c0d5c300e34148fce9\") " pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:16.413663 kubelet[2675]: E0711 04:44:16.413627 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:16.413663 kubelet[2675]: E0711 04:44:16.413666 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:16.413882 kubelet[2675]: E0711 04:44:16.413637 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:16.476181 sudo[2710]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 11 04:44:16.476876 sudo[2710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 11 04:44:16.789162 sudo[2710]: pam_unix(sudo:session): session closed for user root Jul 11 04:44:16.982415 kubelet[2675]: I0711 04:44:16.982386 2675 apiserver.go:52] "Watching apiserver" Jul 11 04:44:16.987890 kubelet[2675]: I0711 04:44:16.987848 2675 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 04:44:17.022517 kubelet[2675]: E0711 04:44:17.022429 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:17.022517 kubelet[2675]: E0711 04:44:17.022449 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jul 11 04:44:17.028342 kubelet[2675]: E0711 04:44:17.027879 2675 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 04:44:17.028342 kubelet[2675]: E0711 04:44:17.028064 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:17.051081 kubelet[2675]: I0711 04:44:17.050941 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.050925875 podStartE2EDuration="1.050925875s" podCreationTimestamp="2025-07-11 04:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:44:17.043941836 +0000 UTC m=+1.111957879" watchObservedRunningTime="2025-07-11 04:44:17.050925875 +0000 UTC m=+1.118941918" Jul 11 04:44:17.058224 kubelet[2675]: I0711 04:44:17.058130 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.058114948 podStartE2EDuration="1.058114948s" podCreationTimestamp="2025-07-11 04:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:44:17.051570956 +0000 UTC m=+1.119586999" watchObservedRunningTime="2025-07-11 04:44:17.058114948 +0000 UTC m=+1.126130951" Jul 11 04:44:17.065979 kubelet[2675]: I0711 04:44:17.065694 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.065681633 podStartE2EDuration="1.065681633s" podCreationTimestamp="2025-07-11 04:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-11 04:44:17.058238577 +0000 UTC m=+1.126254620" watchObservedRunningTime="2025-07-11 04:44:17.065681633 +0000 UTC m=+1.133697636" Jul 11 04:44:18.024363 kubelet[2675]: E0711 04:44:18.024262 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:18.341699 sudo[1743]: pam_unix(sudo:session): session closed for user root Jul 11 04:44:18.343431 sshd[1742]: Connection closed by 10.0.0.1 port 51166 Jul 11 04:44:18.343910 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Jul 11 04:44:18.347723 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:51166.service: Deactivated successfully. Jul 11 04:44:18.349970 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 04:44:18.350336 systemd[1]: session-7.scope: Consumed 7.308s CPU time, 260.2M memory peak. Jul 11 04:44:18.351549 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. Jul 11 04:44:18.353089 systemd-logind[1514]: Removed session 7. Jul 11 04:44:20.138947 kubelet[2675]: I0711 04:44:20.138748 2675 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 04:44:20.139597 containerd[1557]: time="2025-07-11T04:44:20.139494545Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 11 04:44:20.139847 kubelet[2675]: I0711 04:44:20.139705 2675 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922426 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-xtables-lock\") pod \"kube-proxy-sxg78\" (UID: \"e1a82fb3-1252-4fcd-b87d-8bf586fcb481\") " pod="kube-system/kube-proxy-sxg78" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922473 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-hostproc\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922492 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-run\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922506 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cni-path\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922523 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-kube-proxy\") pod \"kube-proxy-sxg78\" (UID: \"e1a82fb3-1252-4fcd-b87d-8bf586fcb481\") " 
pod="kube-system/kube-proxy-sxg78" Jul 11 04:44:20.922722 kubelet[2675]: I0711 04:44:20.922539 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jb4\" (UniqueName: \"kubernetes.io/projected/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-kube-api-access-r9jb4\") pod \"kube-proxy-sxg78\" (UID: \"e1a82fb3-1252-4fcd-b87d-8bf586fcb481\") " pod="kube-system/kube-proxy-sxg78" Jul 11 04:44:20.922958 kubelet[2675]: I0711 04:44:20.922555 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-bpf-maps\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.922958 kubelet[2675]: I0711 04:44:20.922569 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-cgroup\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.922958 kubelet[2675]: I0711 04:44:20.922595 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-lib-modules\") pod \"kube-proxy-sxg78\" (UID: \"e1a82fb3-1252-4fcd-b87d-8bf586fcb481\") " pod="kube-system/kube-proxy-sxg78" Jul 11 04:44:20.922958 kubelet[2675]: I0711 04:44:20.922608 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-etc-cni-netd\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:20.933233 systemd[1]: Created slice 
kubepods-besteffort-pode1a82fb3_1252_4fcd_b87d_8bf586fcb481.slice - libcontainer container kubepods-besteffort-pode1a82fb3_1252_4fcd_b87d_8bf586fcb481.slice. Jul 11 04:44:20.951268 systemd[1]: Created slice kubepods-burstable-pod19aab396_44fe_4774_8e8b_8e78779ca391.slice - libcontainer container kubepods-burstable-pod19aab396_44fe_4774_8e8b_8e78779ca391.slice. Jul 11 04:44:21.023876 kubelet[2675]: I0711 04:44:21.023830 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19aab396-44fe-4774-8e8b-8e78779ca391-clustermesh-secrets\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.023876 kubelet[2675]: I0711 04:44:21.023881 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-kernel\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.023910 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-lib-modules\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.023924 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-hubble-tls\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.023967 2675 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-xtables-lock\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.023982 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h49g\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-kube-api-access-2h49g\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.024014 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-config-path\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.024044 kubelet[2675]: I0711 04:44:21.024031 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-net\") pod \"cilium-m8fnt\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " pod="kube-system/cilium-m8fnt" Jul 11 04:44:21.033787 kubelet[2675]: E0711 04:44:21.033754 2675 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 11 04:44:21.033787 kubelet[2675]: E0711 04:44:21.033784 2675 projected.go:194] Error preparing data for projected volume kube-api-access-r9jb4 for pod kube-system/kube-proxy-sxg78: configmap "kube-root-ca.crt" not found Jul 11 04:44:21.033931 kubelet[2675]: E0711 04:44:21.033853 2675 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-kube-api-access-r9jb4 podName:e1a82fb3-1252-4fcd-b87d-8bf586fcb481 nodeName:}" failed. No retries permitted until 2025-07-11 04:44:21.533822341 +0000 UTC m=+5.601838344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r9jb4" (UniqueName: "kubernetes.io/projected/e1a82fb3-1252-4fcd-b87d-8bf586fcb481-kube-api-access-r9jb4") pod "kube-proxy-sxg78" (UID: "e1a82fb3-1252-4fcd-b87d-8bf586fcb481") : configmap "kube-root-ca.crt" not found Jul 11 04:44:21.256624 kubelet[2675]: E0711 04:44:21.255434 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.257099 containerd[1557]: time="2025-07-11T04:44:21.255987793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8fnt,Uid:19aab396-44fe-4774-8e8b-8e78779ca391,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:21.275833 systemd[1]: Created slice kubepods-besteffort-pod5777187a_576e_497e_bae2_254ba08866e7.slice - libcontainer container kubepods-besteffort-pod5777187a_576e_497e_bae2_254ba08866e7.slice. Jul 11 04:44:21.314634 containerd[1557]: time="2025-07-11T04:44:21.314361173Z" level=info msg="connecting to shim 928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:21.337530 systemd[1]: Started cri-containerd-928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648.scope - libcontainer container 928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648. 
Jul 11 04:44:21.358115 containerd[1557]: time="2025-07-11T04:44:21.358076161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m8fnt,Uid:19aab396-44fe-4774-8e8b-8e78779ca391,Namespace:kube-system,Attempt:0,} returns sandbox id \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\"" Jul 11 04:44:21.358958 kubelet[2675]: E0711 04:44:21.358931 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.360286 containerd[1557]: time="2025-07-11T04:44:21.360257306Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 04:44:21.426295 kubelet[2675]: I0711 04:44:21.426255 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5777187a-576e-497e-bae2-254ba08866e7-cilium-config-path\") pod \"cilium-operator-5d85765b45-5dhh4\" (UID: \"5777187a-576e-497e-bae2-254ba08866e7\") " pod="kube-system/cilium-operator-5d85765b45-5dhh4" Jul 11 04:44:21.426295 kubelet[2675]: I0711 04:44:21.426301 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx88m\" (UniqueName: \"kubernetes.io/projected/5777187a-576e-497e-bae2-254ba08866e7-kube-api-access-zx88m\") pod \"cilium-operator-5d85765b45-5dhh4\" (UID: \"5777187a-576e-497e-bae2-254ba08866e7\") " pod="kube-system/cilium-operator-5d85765b45-5dhh4" Jul 11 04:44:21.578852 kubelet[2675]: E0711 04:44:21.578820 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.579268 containerd[1557]: time="2025-07-11T04:44:21.579206844Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5d85765b45-5dhh4,Uid:5777187a-576e-497e-bae2-254ba08866e7,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:21.597827 containerd[1557]: time="2025-07-11T04:44:21.597791499Z" level=info msg="connecting to shim 32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378" address="unix:///run/containerd/s/80987bdb186493410ca3070134e6b545ab50e1bdf0824e5778eea2a3e6887013" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:21.619268 kubelet[2675]: E0711 04:44:21.618966 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.619547 systemd[1]: Started cri-containerd-32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378.scope - libcontainer container 32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378. Jul 11 04:44:21.660900 containerd[1557]: time="2025-07-11T04:44:21.660857154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5dhh4,Uid:5777187a-576e-497e-bae2-254ba08866e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\"" Jul 11 04:44:21.661558 kubelet[2675]: E0711 04:44:21.661537 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.845925 kubelet[2675]: E0711 04:44:21.845696 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.847547 containerd[1557]: time="2025-07-11T04:44:21.847511374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxg78,Uid:e1a82fb3-1252-4fcd-b87d-8bf586fcb481,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:21.863886 containerd[1557]: 
time="2025-07-11T04:44:21.863830207Z" level=info msg="connecting to shim 9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d" address="unix:///run/containerd/s/f4a7f2f73ce71129769538ebf267a89c98fe9a70f4f7fb98dfed13ba6b8e6598" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:21.884504 systemd[1]: Started cri-containerd-9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d.scope - libcontainer container 9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d. Jul 11 04:44:21.905482 containerd[1557]: time="2025-07-11T04:44:21.905445885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxg78,Uid:e1a82fb3-1252-4fcd-b87d-8bf586fcb481,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d\"" Jul 11 04:44:21.906416 kubelet[2675]: E0711 04:44:21.906391 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:21.909016 containerd[1557]: time="2025-07-11T04:44:21.908912667Z" level=info msg="CreateContainer within sandbox \"9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 04:44:21.917214 containerd[1557]: time="2025-07-11T04:44:21.917175128Z" level=info msg="Container 8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:21.923934 containerd[1557]: time="2025-07-11T04:44:21.923899283Z" level=info msg="CreateContainer within sandbox \"9dc7a712bd98b39a17a58d5a98932edd184a1d49710271896909092768545a5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca\"" Jul 11 04:44:21.924481 containerd[1557]: time="2025-07-11T04:44:21.924436356Z" level=info msg="StartContainer for 
\"8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca\"" Jul 11 04:44:21.925939 containerd[1557]: time="2025-07-11T04:44:21.925903471Z" level=info msg="connecting to shim 8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca" address="unix:///run/containerd/s/f4a7f2f73ce71129769538ebf267a89c98fe9a70f4f7fb98dfed13ba6b8e6598" protocol=ttrpc version=3 Jul 11 04:44:21.944479 systemd[1]: Started cri-containerd-8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca.scope - libcontainer container 8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca. Jul 11 04:44:21.977171 containerd[1557]: time="2025-07-11T04:44:21.977124672Z" level=info msg="StartContainer for \"8007ee1e227dfb998fd0241a3f16e7fdc7176d2879170e3a068d966ef52e54ca\" returns successfully" Jul 11 04:44:22.034376 kubelet[2675]: E0711 04:44:22.034336 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:22.039628 kubelet[2675]: E0711 04:44:22.039595 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:22.061493 kubelet[2675]: I0711 04:44:22.061422 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sxg78" podStartSLOduration=2.061403082 podStartE2EDuration="2.061403082s" podCreationTimestamp="2025-07-11 04:44:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:44:22.049649506 +0000 UTC m=+6.117665629" watchObservedRunningTime="2025-07-11 04:44:22.061403082 +0000 UTC m=+6.129419125" Jul 11 04:44:22.600288 kubelet[2675]: E0711 04:44:22.600238 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:23.041415 kubelet[2675]: E0711 04:44:23.041391 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:24.791860 kubelet[2675]: E0711 04:44:24.791829 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:25.044731 kubelet[2675]: E0711 04:44:25.044623 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:28.486357 update_engine[1516]: I20250711 04:44:28.485886 1516 update_attempter.cc:509] Updating boot flags... Jul 11 04:44:31.807358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310044813.mount: Deactivated successfully. 
Jul 11 04:44:34.658032 containerd[1557]: time="2025-07-11T04:44:34.657287877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:34.658032 containerd[1557]: time="2025-07-11T04:44:34.657997450Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 11 04:44:34.658903 containerd[1557]: time="2025-07-11T04:44:34.658853330Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:34.660088 containerd[1557]: time="2025-07-11T04:44:34.660057276Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.299765436s" Jul 11 04:44:34.660141 containerd[1557]: time="2025-07-11T04:44:34.660089362Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 04:44:34.668684 containerd[1557]: time="2025-07-11T04:44:34.668651646Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 11 04:44:34.694419 containerd[1557]: time="2025-07-11T04:44:34.694376666Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 04:44:34.700948 containerd[1557]: time="2025-07-11T04:44:34.700907049Z" level=info msg="Container 0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:34.705631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555378034.mount: Deactivated successfully. Jul 11 04:44:34.711227 containerd[1557]: time="2025-07-11T04:44:34.711174413Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\"" Jul 11 04:44:34.714275 containerd[1557]: time="2025-07-11T04:44:34.714018825Z" level=info msg="StartContainer for \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\"" Jul 11 04:44:34.714893 containerd[1557]: time="2025-07-11T04:44:34.714845940Z" level=info msg="connecting to shim 0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" protocol=ttrpc version=3 Jul 11 04:44:34.765462 systemd[1]: Started cri-containerd-0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f.scope - libcontainer container 0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f. Jul 11 04:44:34.810903 containerd[1557]: time="2025-07-11T04:44:34.810865609Z" level=info msg="StartContainer for \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" returns successfully" Jul 11 04:44:34.878420 systemd[1]: cri-containerd-0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f.scope: Deactivated successfully. Jul 11 04:44:34.878873 systemd[1]: cri-containerd-0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f.scope: Consumed 68ms CPU time, 5.4M memory peak, 3.1M written to disk. 
Jul 11 04:44:34.899294 containerd[1557]: time="2025-07-11T04:44:34.899128985Z" level=info msg="received exit event container_id:\"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" id:\"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" pid:3115 exited_at:{seconds:1752209074 nanos:893480167}" Jul 11 04:44:34.899294 containerd[1557]: time="2025-07-11T04:44:34.899256489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" id:\"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" pid:3115 exited_at:{seconds:1752209074 nanos:893480167}" Jul 11 04:44:34.937792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f-rootfs.mount: Deactivated successfully. Jul 11 04:44:35.139836 kubelet[2675]: E0711 04:44:35.138732 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:35.927775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576739671.mount: Deactivated successfully. 
Jul 11 04:44:36.137447 kubelet[2675]: E0711 04:44:36.137401 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:36.141175 containerd[1557]: time="2025-07-11T04:44:36.141124986Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 04:44:36.156535 containerd[1557]: time="2025-07-11T04:44:36.156475714Z" level=info msg="Container 81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:36.159537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832150641.mount: Deactivated successfully. Jul 11 04:44:36.163206 containerd[1557]: time="2025-07-11T04:44:36.163099165Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\"" Jul 11 04:44:36.164324 containerd[1557]: time="2025-07-11T04:44:36.164277679Z" level=info msg="StartContainer for \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\"" Jul 11 04:44:36.165485 containerd[1557]: time="2025-07-11T04:44:36.165293446Z" level=info msg="connecting to shim 81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" protocol=ttrpc version=3 Jul 11 04:44:36.190563 systemd[1]: Started cri-containerd-81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd.scope - libcontainer container 81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd. 
Jul 11 04:44:36.225392 containerd[1557]: time="2025-07-11T04:44:36.225298847Z" level=info msg="StartContainer for \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" returns successfully" Jul 11 04:44:36.240641 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 04:44:36.240862 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 04:44:36.241051 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 11 04:44:36.242996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 04:44:36.246456 systemd[1]: cri-containerd-81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd.scope: Deactivated successfully. Jul 11 04:44:36.248977 containerd[1557]: time="2025-07-11T04:44:36.247576915Z" level=info msg="received exit event container_id:\"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" id:\"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" pid:3172 exited_at:{seconds:1752209076 nanos:247078793}" Jul 11 04:44:36.248977 containerd[1557]: time="2025-07-11T04:44:36.247757145Z" level=info msg="TaskExit event in podsandbox handler container_id:\"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" id:\"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" pid:3172 exited_at:{seconds:1752209076 nanos:247078793}" Jul 11 04:44:36.288586 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 11 04:44:36.615019 containerd[1557]: time="2025-07-11T04:44:36.614967411Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 11 04:44:36.617058 containerd[1557]: time="2025-07-11T04:44:36.617012788Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.948325616s" Jul 11 04:44:36.617058 containerd[1557]: time="2025-07-11T04:44:36.617053595Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 11 04:44:36.620777 containerd[1557]: time="2025-07-11T04:44:36.620720799Z" level=info msg="CreateContainer within sandbox \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 11 04:44:36.627136 containerd[1557]: time="2025-07-11T04:44:36.626924700Z" level=info msg="Container 6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:36.628650 containerd[1557]: time="2025-07-11T04:44:36.628613778Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:36.629457 containerd[1557]: time="2025-07-11T04:44:36.629406749Z" level=info msg="ImageCreate event 
name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 04:44:36.632770 containerd[1557]: time="2025-07-11T04:44:36.632734697Z" level=info msg="CreateContainer within sandbox \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\"" Jul 11 04:44:36.633360 containerd[1557]: time="2025-07-11T04:44:36.633336036Z" level=info msg="StartContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\"" Jul 11 04:44:36.634391 containerd[1557]: time="2025-07-11T04:44:36.634358884Z" level=info msg="connecting to shim 6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd" address="unix:///run/containerd/s/80987bdb186493410ca3070134e6b545ab50e1bdf0824e5778eea2a3e6887013" protocol=ttrpc version=3 Jul 11 04:44:36.654549 systemd[1]: Started cri-containerd-6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd.scope - libcontainer container 6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd. Jul 11 04:44:36.682086 containerd[1557]: time="2025-07-11T04:44:36.682040336Z" level=info msg="StartContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" returns successfully" Jul 11 04:44:36.925638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd-rootfs.mount: Deactivated successfully. 
Jul 11 04:44:37.151235 kubelet[2675]: E0711 04:44:37.151201 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:37.157868 kubelet[2675]: E0711 04:44:37.157667 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:37.159500 containerd[1557]: time="2025-07-11T04:44:37.159424117Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 04:44:37.175348 containerd[1557]: time="2025-07-11T04:44:37.174469280Z" level=info msg="Container 9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:37.188400 kubelet[2675]: I0711 04:44:37.188160 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5dhh4" podStartSLOduration=1.2328934440000001 podStartE2EDuration="16.18814043s" podCreationTimestamp="2025-07-11 04:44:21 +0000 UTC" firstStartedPulling="2025-07-11 04:44:21.662416589 +0000 UTC m=+5.730432632" lastFinishedPulling="2025-07-11 04:44:36.617663575 +0000 UTC m=+20.685679618" observedRunningTime="2025-07-11 04:44:37.162120694 +0000 UTC m=+21.230136737" watchObservedRunningTime="2025-07-11 04:44:37.18814043 +0000 UTC m=+21.256156473" Jul 11 04:44:37.197608 containerd[1557]: time="2025-07-11T04:44:37.197484833Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\"" Jul 11 04:44:37.198020 containerd[1557]: time="2025-07-11T04:44:37.197979629Z" 
level=info msg="StartContainer for \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\"" Jul 11 04:44:37.200893 containerd[1557]: time="2025-07-11T04:44:37.200849592Z" level=info msg="connecting to shim 9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" protocol=ttrpc version=3 Jul 11 04:44:37.230270 systemd[1]: Started cri-containerd-9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700.scope - libcontainer container 9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700. Jul 11 04:44:37.317540 containerd[1557]: time="2025-07-11T04:44:37.317487438Z" level=info msg="StartContainer for \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" returns successfully" Jul 11 04:44:37.325872 systemd[1]: cri-containerd-9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700.scope: Deactivated successfully. Jul 11 04:44:37.330937 containerd[1557]: time="2025-07-11T04:44:37.330897828Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" id:\"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" pid:3264 exited_at:{seconds:1752209077 nanos:330135711}" Jul 11 04:44:37.331065 containerd[1557]: time="2025-07-11T04:44:37.330964919Z" level=info msg="received exit event container_id:\"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" id:\"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" pid:3264 exited_at:{seconds:1752209077 nanos:330135711}" Jul 11 04:44:38.162075 kubelet[2675]: E0711 04:44:38.162046 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:38.163645 kubelet[2675]: E0711 04:44:38.162104 2675 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:38.167805 containerd[1557]: time="2025-07-11T04:44:38.166703882Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 04:44:38.176924 containerd[1557]: time="2025-07-11T04:44:38.176199736Z" level=info msg="Container c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:38.185793 containerd[1557]: time="2025-07-11T04:44:38.185607537Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\"" Jul 11 04:44:38.186454 containerd[1557]: time="2025-07-11T04:44:38.186415694Z" level=info msg="StartContainer for \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\"" Jul 11 04:44:38.187662 containerd[1557]: time="2025-07-11T04:44:38.187583303Z" level=info msg="connecting to shim c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" protocol=ttrpc version=3 Jul 11 04:44:38.206992 systemd[1]: Started cri-containerd-c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df.scope - libcontainer container c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df. Jul 11 04:44:38.231203 systemd[1]: cri-containerd-c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df.scope: Deactivated successfully. 
Jul 11 04:44:38.231970 containerd[1557]: time="2025-07-11T04:44:38.231259705Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" id:\"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" pid:3302 exited_at:{seconds:1752209078 nanos:231039073}" Jul 11 04:44:38.232620 containerd[1557]: time="2025-07-11T04:44:38.232507965Z" level=info msg="received exit event container_id:\"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" id:\"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" pid:3302 exited_at:{seconds:1752209078 nanos:231039073}" Jul 11 04:44:38.234444 containerd[1557]: time="2025-07-11T04:44:38.234418842Z" level=info msg="StartContainer for \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" returns successfully" Jul 11 04:44:38.249981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df-rootfs.mount: Deactivated successfully. 
Jul 11 04:44:39.167059 kubelet[2675]: E0711 04:44:39.167019 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:39.169654 containerd[1557]: time="2025-07-11T04:44:39.169615748Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 04:44:39.203336 containerd[1557]: time="2025-07-11T04:44:39.201878166Z" level=info msg="Container e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:44:39.208446 containerd[1557]: time="2025-07-11T04:44:39.208417733Z" level=info msg="CreateContainer within sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\"" Jul 11 04:44:39.209007 containerd[1557]: time="2025-07-11T04:44:39.208983730Z" level=info msg="StartContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\"" Jul 11 04:44:39.209883 containerd[1557]: time="2025-07-11T04:44:39.209862609Z" level=info msg="connecting to shim e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c" address="unix:///run/containerd/s/9852b876c72219dfe2d74e681894b9e9294d82666780e68e4e90452b6c45f08f" protocol=ttrpc version=3 Jul 11 04:44:39.234469 systemd[1]: Started cri-containerd-e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c.scope - libcontainer container e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c. 
Jul 11 04:44:39.259298 containerd[1557]: time="2025-07-11T04:44:39.259255631Z" level=info msg="StartContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" returns successfully" Jul 11 04:44:39.358849 containerd[1557]: time="2025-07-11T04:44:39.358811979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" id:\"cb7f37c3396db9448f22ef34aee39eb30324d20d7cbcfed7b67dd50e2ea8b0c7\" pid:3369 exited_at:{seconds:1752209079 nanos:358519779}" Jul 11 04:44:39.400214 kubelet[2675]: I0711 04:44:39.400177 2675 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 04:44:39.427667 systemd[1]: Created slice kubepods-burstable-pod2bf6d324_07fd_4ced_8107_015bc9683f70.slice - libcontainer container kubepods-burstable-pod2bf6d324_07fd_4ced_8107_015bc9683f70.slice. Jul 11 04:44:39.435673 systemd[1]: Created slice kubepods-burstable-pod40fea0e1_0689_42f4_8023_4c4487ad5a8c.slice - libcontainer container kubepods-burstable-pod40fea0e1_0689_42f4_8023_4c4487ad5a8c.slice. 
Jul 11 04:44:39.553514 kubelet[2675]: I0711 04:44:39.553479 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqhgg\" (UniqueName: \"kubernetes.io/projected/40fea0e1-0689-42f4-8023-4c4487ad5a8c-kube-api-access-nqhgg\") pod \"coredns-7c65d6cfc9-dsh8b\" (UID: \"40fea0e1-0689-42f4-8023-4c4487ad5a8c\") " pod="kube-system/coredns-7c65d6cfc9-dsh8b" Jul 11 04:44:39.553724 kubelet[2675]: I0711 04:44:39.553670 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40fea0e1-0689-42f4-8023-4c4487ad5a8c-config-volume\") pod \"coredns-7c65d6cfc9-dsh8b\" (UID: \"40fea0e1-0689-42f4-8023-4c4487ad5a8c\") " pod="kube-system/coredns-7c65d6cfc9-dsh8b" Jul 11 04:44:39.553724 kubelet[2675]: I0711 04:44:39.553698 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bf6d324-07fd-4ced-8107-015bc9683f70-config-volume\") pod \"coredns-7c65d6cfc9-kw4vv\" (UID: \"2bf6d324-07fd-4ced-8107-015bc9683f70\") " pod="kube-system/coredns-7c65d6cfc9-kw4vv" Jul 11 04:44:39.553911 kubelet[2675]: I0711 04:44:39.553714 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqf7s\" (UniqueName: \"kubernetes.io/projected/2bf6d324-07fd-4ced-8107-015bc9683f70-kube-api-access-cqf7s\") pod \"coredns-7c65d6cfc9-kw4vv\" (UID: \"2bf6d324-07fd-4ced-8107-015bc9683f70\") " pod="kube-system/coredns-7c65d6cfc9-kw4vv" Jul 11 04:44:39.733411 kubelet[2675]: E0711 04:44:39.733326 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:39.734889 containerd[1557]: time="2025-07-11T04:44:39.734140665Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-kw4vv,Uid:2bf6d324-07fd-4ced-8107-015bc9683f70,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:39.738066 kubelet[2675]: E0711 04:44:39.738047 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:39.738875 containerd[1557]: time="2025-07-11T04:44:39.738838662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dsh8b,Uid:40fea0e1-0689-42f4-8023-4c4487ad5a8c,Namespace:kube-system,Attempt:0,}" Jul 11 04:44:40.173876 kubelet[2675]: E0711 04:44:40.173787 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:40.189409 kubelet[2675]: I0711 04:44:40.188018 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m8fnt" podStartSLOduration=6.879465229 podStartE2EDuration="20.188001592s" podCreationTimestamp="2025-07-11 04:44:20 +0000 UTC" firstStartedPulling="2025-07-11 04:44:21.359787782 +0000 UTC m=+5.427803825" lastFinishedPulling="2025-07-11 04:44:34.668324185 +0000 UTC m=+18.736340188" observedRunningTime="2025-07-11 04:44:40.186974822 +0000 UTC m=+24.254990865" watchObservedRunningTime="2025-07-11 04:44:40.188001592 +0000 UTC m=+24.256017635" Jul 11 04:44:41.175937 kubelet[2675]: E0711 04:44:41.175860 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:41.423382 systemd-networkd[1438]: cilium_host: Link UP Jul 11 04:44:41.423504 systemd-networkd[1438]: cilium_net: Link UP Jul 11 04:44:41.423835 systemd-networkd[1438]: cilium_net: Gained carrier Jul 11 04:44:41.424988 systemd-networkd[1438]: cilium_host: Gained carrier Jul 11 04:44:41.509194 
systemd-networkd[1438]: cilium_vxlan: Link UP Jul 11 04:44:41.509200 systemd-networkd[1438]: cilium_vxlan: Gained carrier Jul 11 04:44:41.790596 systemd-networkd[1438]: cilium_net: Gained IPv6LL Jul 11 04:44:41.815552 kernel: NET: Registered PF_ALG protocol family Jul 11 04:44:42.118915 systemd-networkd[1438]: cilium_host: Gained IPv6LL Jul 11 04:44:42.179408 kubelet[2675]: E0711 04:44:42.179362 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:42.378420 systemd-networkd[1438]: lxc_health: Link UP Jul 11 04:44:42.378866 systemd-networkd[1438]: lxc_health: Gained carrier Jul 11 04:44:42.834271 systemd-networkd[1438]: lxc3a5610476f46: Link UP Jul 11 04:44:42.835799 kernel: eth0: renamed from tmp0d711 Jul 11 04:44:42.840447 systemd-networkd[1438]: lxc7aaac5d59606: Link UP Jul 11 04:44:42.850344 kernel: eth0: renamed from tmp9f401 Jul 11 04:44:42.850651 systemd-networkd[1438]: lxc3a5610476f46: Gained carrier Jul 11 04:44:42.852469 systemd-networkd[1438]: lxc7aaac5d59606: Gained carrier Jul 11 04:44:43.143499 systemd-networkd[1438]: cilium_vxlan: Gained IPv6LL Jul 11 04:44:43.274979 kubelet[2675]: E0711 04:44:43.274923 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:43.492587 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:47964.service - OpenSSH per-connection server daemon (10.0.0.1:47964). Jul 11 04:44:43.546189 sshd[3847]: Accepted publickey for core from 10.0.0.1 port 47964 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:44:43.547514 sshd-session[3847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:44:43.551507 systemd-logind[1514]: New session 8 of user core. 
Jul 11 04:44:43.561463 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 11 04:44:43.654569 systemd-networkd[1438]: lxc_health: Gained IPv6LL Jul 11 04:44:43.690825 sshd[3850]: Connection closed by 10.0.0.1 port 47964 Jul 11 04:44:43.691363 sshd-session[3847]: pam_unix(sshd:session): session closed for user core Jul 11 04:44:43.694903 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:47964.service: Deactivated successfully. Jul 11 04:44:43.696728 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 04:44:43.698808 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Jul 11 04:44:43.700016 systemd-logind[1514]: Removed session 8. Jul 11 04:44:43.974596 systemd-networkd[1438]: lxc7aaac5d59606: Gained IPv6LL Jul 11 04:44:44.189418 kubelet[2675]: E0711 04:44:44.189202 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:44:44.807859 systemd-networkd[1438]: lxc3a5610476f46: Gained IPv6LL Jul 11 04:44:46.316997 containerd[1557]: time="2025-07-11T04:44:46.316910403Z" level=info msg="connecting to shim 9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7" address="unix:///run/containerd/s/dc73493f51f72f3facc1a0e480603e3ffa692efd803c3d0452e231dbf0167f4d" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:46.319022 containerd[1557]: time="2025-07-11T04:44:46.318988822Z" level=info msg="connecting to shim 0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b" address="unix:///run/containerd/s/b2c859f5627a2235b516b8717bae499573baaaa51fa577c0612c1bcb15a52713" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:44:46.344461 systemd[1]: Started cri-containerd-0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b.scope - libcontainer container 0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b. 
Jul 11 04:44:46.347872 systemd[1]: Started cri-containerd-9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7.scope - libcontainer container 9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7.
Jul 11 04:44:46.358204 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 04:44:46.362171 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 04:44:46.384344 containerd[1557]: time="2025-07-11T04:44:46.384237978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kw4vv,Uid:2bf6d324-07fd-4ced-8107-015bc9683f70,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b\""
Jul 11 04:44:46.385208 kubelet[2675]: E0711 04:44:46.385188 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:46.387773 containerd[1557]: time="2025-07-11T04:44:46.387742760Z" level=info msg="CreateContainer within sandbox \"0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 04:44:46.391916 containerd[1557]: time="2025-07-11T04:44:46.391887078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dsh8b,Uid:40fea0e1-0689-42f4-8023-4c4487ad5a8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7\""
Jul 11 04:44:46.392898 kubelet[2675]: E0711 04:44:46.392857 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:46.397553 containerd[1557]: time="2025-07-11T04:44:46.397502963Z" level=info msg="CreateContainer within sandbox \"9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 04:44:46.410776 containerd[1557]: time="2025-07-11T04:44:46.410249544Z" level=info msg="Container a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a: CDI devices from CRI Config.CDIDevices: []"
Jul 11 04:44:46.417477 containerd[1557]: time="2025-07-11T04:44:46.417378640Z" level=info msg="Container 9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e: CDI devices from CRI Config.CDIDevices: []"
Jul 11 04:44:46.424359 containerd[1557]: time="2025-07-11T04:44:46.424300198Z" level=info msg="CreateContainer within sandbox \"9f4013b05a3ceb7f25f72b843d47d22333fe9101b8321e54a347b541781813a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a\""
Jul 11 04:44:46.425620 containerd[1557]: time="2025-07-11T04:44:46.425590909Z" level=info msg="CreateContainer within sandbox \"0d71140d8b422f6bdbb7969a3cc468cfe2c3a72930ce0b7a021da2903829146b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e\""
Jul 11 04:44:46.425868 containerd[1557]: time="2025-07-11T04:44:46.425825049Z" level=info msg="StartContainer for \"a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a\""
Jul 11 04:44:46.426059 containerd[1557]: time="2025-07-11T04:44:46.426036028Z" level=info msg="StartContainer for \"9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e\""
Jul 11 04:44:46.426785 containerd[1557]: time="2025-07-11T04:44:46.426744169Z" level=info msg="connecting to shim 9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e" address="unix:///run/containerd/s/b2c859f5627a2235b516b8717bae499573baaaa51fa577c0612c1bcb15a52713" protocol=ttrpc version=3
Jul 11 04:44:46.427788 containerd[1557]: time="2025-07-11T04:44:46.427731534Z" level=info msg="connecting to shim a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a" address="unix:///run/containerd/s/dc73493f51f72f3facc1a0e480603e3ffa692efd803c3d0452e231dbf0167f4d" protocol=ttrpc version=3
Jul 11 04:44:46.449482 systemd[1]: Started cri-containerd-9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e.scope - libcontainer container 9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e.
Jul 11 04:44:46.452285 systemd[1]: Started cri-containerd-a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a.scope - libcontainer container a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a.
Jul 11 04:44:46.482363 containerd[1557]: time="2025-07-11T04:44:46.481433612Z" level=info msg="StartContainer for \"9415431593c4a07f58b5b06d95060337e7ff2be960f5992e53d7ffa8cf07ce6e\" returns successfully"
Jul 11 04:44:46.493362 containerd[1557]: time="2025-07-11T04:44:46.493145224Z" level=info msg="StartContainer for \"a35ad89fe222a4fd80ba608f711f8d0a50c4a4309bbfb8c195de05139d47d71a\" returns successfully"
Jul 11 04:44:47.196264 kubelet[2675]: E0711 04:44:47.196225 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:47.201553 kubelet[2675]: E0711 04:44:47.201345 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:47.210692 kubelet[2675]: I0711 04:44:47.210616 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dsh8b" podStartSLOduration=26.21060035 podStartE2EDuration="26.21060035s" podCreationTimestamp="2025-07-11 04:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:44:47.210572068 +0000 UTC m=+31.278588111" watchObservedRunningTime="2025-07-11 04:44:47.21060035 +0000 UTC m=+31.278616393"
Jul 11 04:44:48.203281 kubelet[2675]: E0711 04:44:48.203190 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:48.203281 kubelet[2675]: E0711 04:44:48.203236 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:48.710224 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:47972.service - OpenSSH per-connection server daemon (10.0.0.1:47972).
Jul 11 04:44:48.763193 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 47972 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:44:48.764575 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:44:48.768399 systemd-logind[1514]: New session 9 of user core.
Jul 11 04:44:48.782505 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 11 04:44:48.897964 sshd[4048]: Connection closed by 10.0.0.1 port 47972
Jul 11 04:44:48.899029 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Jul 11 04:44:48.904102 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:47972.service: Deactivated successfully.
Jul 11 04:44:48.906151 systemd[1]: session-9.scope: Deactivated successfully.
Jul 11 04:44:48.907131 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit.
Jul 11 04:44:48.908396 systemd-logind[1514]: Removed session 9.
Jul 11 04:44:49.205306 kubelet[2675]: E0711 04:44:49.205252 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:49.205306 kubelet[2675]: E0711 04:44:49.205306 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:44:53.912112 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:34060.service - OpenSSH per-connection server daemon (10.0.0.1:34060).
Jul 11 04:44:53.963455 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 34060 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:44:53.964556 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:44:53.968725 systemd-logind[1514]: New session 10 of user core.
Jul 11 04:44:53.975496 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 11 04:44:54.081247 sshd[4067]: Connection closed by 10.0.0.1 port 34060
Jul 11 04:44:54.081550 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Jul 11 04:44:54.085459 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:34060.service: Deactivated successfully.
Jul 11 04:44:54.087086 systemd[1]: session-10.scope: Deactivated successfully.
Jul 11 04:44:54.088851 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Jul 11 04:44:54.090129 systemd-logind[1514]: Removed session 10.
Jul 11 04:44:59.096918 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:34072.service - OpenSSH per-connection server daemon (10.0.0.1:34072).
Jul 11 04:44:59.158546 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 34072 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:44:59.159777 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:44:59.163942 systemd-logind[1514]: New session 11 of user core.
Jul 11 04:44:59.174526 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 11 04:44:59.300461 sshd[4085]: Connection closed by 10.0.0.1 port 34072
Jul 11 04:44:59.300379 sshd-session[4082]: pam_unix(sshd:session): session closed for user core
Jul 11 04:44:59.312603 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:34072.service: Deactivated successfully.
Jul 11 04:44:59.315145 systemd[1]: session-11.scope: Deactivated successfully.
Jul 11 04:44:59.316674 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Jul 11 04:44:59.319487 systemd-logind[1514]: Removed session 11.
Jul 11 04:44:59.321589 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:34084.service - OpenSSH per-connection server daemon (10.0.0.1:34084).
Jul 11 04:44:59.375230 sshd[4100]: Accepted publickey for core from 10.0.0.1 port 34084 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:44:59.376451 sshd-session[4100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:44:59.380827 systemd-logind[1514]: New session 12 of user core.
Jul 11 04:44:59.392482 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 11 04:44:59.551573 sshd[4103]: Connection closed by 10.0.0.1 port 34084
Jul 11 04:44:59.551006 sshd-session[4100]: pam_unix(sshd:session): session closed for user core
Jul 11 04:44:59.574757 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:34084.service: Deactivated successfully.
Jul 11 04:44:59.581106 systemd[1]: session-12.scope: Deactivated successfully.
Jul 11 04:44:59.582823 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Jul 11 04:44:59.587551 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:34092.service - OpenSSH per-connection server daemon (10.0.0.1:34092).
Jul 11 04:44:59.588100 systemd-logind[1514]: Removed session 12.
Jul 11 04:44:59.637244 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 34092 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:44:59.639250 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:44:59.644912 systemd-logind[1514]: New session 13 of user core.
Jul 11 04:44:59.660574 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 11 04:44:59.787869 sshd[4119]: Connection closed by 10.0.0.1 port 34092
Jul 11 04:44:59.788196 sshd-session[4115]: pam_unix(sshd:session): session closed for user core
Jul 11 04:44:59.792115 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:34092.service: Deactivated successfully.
Jul 11 04:44:59.793731 systemd[1]: session-13.scope: Deactivated successfully.
Jul 11 04:44:59.794578 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Jul 11 04:44:59.795811 systemd-logind[1514]: Removed session 13.
Jul 11 04:45:04.797972 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:33562.service - OpenSSH per-connection server daemon (10.0.0.1:33562).
Jul 11 04:45:04.858844 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 33562 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:04.859642 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:04.863964 systemd-logind[1514]: New session 14 of user core.
Jul 11 04:45:04.873477 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 04:45:04.982985 sshd[4135]: Connection closed by 10.0.0.1 port 33562
Jul 11 04:45:04.983294 sshd-session[4132]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:04.986644 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:33562.service: Deactivated successfully.
Jul 11 04:45:04.988376 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 04:45:04.989125 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
Jul 11 04:45:04.990290 systemd-logind[1514]: Removed session 14.
Jul 11 04:45:10.006925 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:33570.service - OpenSSH per-connection server daemon (10.0.0.1:33570).
Jul 11 04:45:10.057383 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:10.058543 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:10.062073 systemd-logind[1514]: New session 15 of user core.
Jul 11 04:45:10.072505 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 04:45:10.183603 sshd[4151]: Connection closed by 10.0.0.1 port 33570
Jul 11 04:45:10.184020 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:10.194524 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:33570.service: Deactivated successfully.
Jul 11 04:45:10.196018 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 04:45:10.196791 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
Jul 11 04:45:10.199021 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:33578.service - OpenSSH per-connection server daemon (10.0.0.1:33578).
Jul 11 04:45:10.199816 systemd-logind[1514]: Removed session 15.
Jul 11 04:45:10.252973 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 33578 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:10.254045 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:10.258382 systemd-logind[1514]: New session 16 of user core.
Jul 11 04:45:10.272449 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 04:45:10.501015 sshd[4167]: Connection closed by 10.0.0.1 port 33578
Jul 11 04:45:10.501572 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:10.511567 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:33578.service: Deactivated successfully.
Jul 11 04:45:10.512998 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 04:45:10.513669 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
Jul 11 04:45:10.516941 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:33582.service - OpenSSH per-connection server daemon (10.0.0.1:33582).
Jul 11 04:45:10.517535 systemd-logind[1514]: Removed session 16.
Jul 11 04:45:10.568377 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 33582 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:10.569396 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:10.573693 systemd-logind[1514]: New session 17 of user core.
Jul 11 04:45:10.581524 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 04:45:11.912908 sshd[4182]: Connection closed by 10.0.0.1 port 33582
Jul 11 04:45:11.913213 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:11.925994 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:33582.service: Deactivated successfully.
Jul 11 04:45:11.930161 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 04:45:11.931219 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
Jul 11 04:45:11.934248 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:33598.service - OpenSSH per-connection server daemon (10.0.0.1:33598).
Jul 11 04:45:11.935818 systemd-logind[1514]: Removed session 17.
Jul 11 04:45:11.987362 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 33598 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:11.988374 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:11.992035 systemd-logind[1514]: New session 18 of user core.
Jul 11 04:45:11.998454 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 04:45:12.220198 sshd[4207]: Connection closed by 10.0.0.1 port 33598
Jul 11 04:45:12.220881 sshd-session[4204]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:12.229547 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:33598.service: Deactivated successfully.
Jul 11 04:45:12.231229 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 04:45:12.232395 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
Jul 11 04:45:12.235598 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:33610.service - OpenSSH per-connection server daemon (10.0.0.1:33610).
Jul 11 04:45:12.237054 systemd-logind[1514]: Removed session 18.
Jul 11 04:45:12.294545 sshd[4218]: Accepted publickey for core from 10.0.0.1 port 33610 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:12.295756 sshd-session[4218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:12.300210 systemd-logind[1514]: New session 19 of user core.
Jul 11 04:45:12.316476 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 04:45:12.429343 sshd[4221]: Connection closed by 10.0.0.1 port 33610
Jul 11 04:45:12.428240 sshd-session[4218]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:12.431660 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
Jul 11 04:45:12.431807 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:33610.service: Deactivated successfully.
Jul 11 04:45:12.433749 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 04:45:12.435372 systemd-logind[1514]: Removed session 19.
Jul 11 04:45:17.443787 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:57620.service - OpenSSH per-connection server daemon (10.0.0.1:57620).
Jul 11 04:45:17.507605 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 57620 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:17.509375 sshd-session[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:17.513374 systemd-logind[1514]: New session 20 of user core.
Jul 11 04:45:17.519513 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 04:45:17.628647 sshd[4244]: Connection closed by 10.0.0.1 port 57620
Jul 11 04:45:17.628961 sshd-session[4238]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:17.632253 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:57620.service: Deactivated successfully.
Jul 11 04:45:17.635730 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 04:45:17.636397 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
Jul 11 04:45:17.637293 systemd-logind[1514]: Removed session 20.
Jul 11 04:45:22.649416 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:36358.service - OpenSSH per-connection server daemon (10.0.0.1:36358).
Jul 11 04:45:22.713109 sshd[4260]: Accepted publickey for core from 10.0.0.1 port 36358 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:22.714245 sshd-session[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:22.718507 systemd-logind[1514]: New session 21 of user core.
Jul 11 04:45:22.733475 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 04:45:22.839621 sshd[4263]: Connection closed by 10.0.0.1 port 36358
Jul 11 04:45:22.839940 sshd-session[4260]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:22.843330 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:36358.service: Deactivated successfully.
Jul 11 04:45:22.846132 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 04:45:22.847268 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
Jul 11 04:45:22.849251 systemd-logind[1514]: Removed session 21.
Jul 11 04:45:24.005383 kubelet[2675]: E0711 04:45:24.005267 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:45:26.006093 kubelet[2675]: E0711 04:45:26.006040 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:45:27.856514 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:36370.service - OpenSSH per-connection server daemon (10.0.0.1:36370).
Jul 11 04:45:27.913351 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 36370 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:27.914641 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:27.918999 systemd-logind[1514]: New session 22 of user core.
Jul 11 04:45:27.930473 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 04:45:28.048280 sshd[4279]: Connection closed by 10.0.0.1 port 36370
Jul 11 04:45:28.049641 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Jul 11 04:45:28.057765 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:36370.service: Deactivated successfully.
Jul 11 04:45:28.059530 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 04:45:28.060374 systemd-logind[1514]: Session 22 logged out. Waiting for processes to exit.
Jul 11 04:45:28.063809 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:36384.service - OpenSSH per-connection server daemon (10.0.0.1:36384).
Jul 11 04:45:28.064424 systemd-logind[1514]: Removed session 22.
Jul 11 04:45:28.114047 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 36384 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0
Jul 11 04:45:28.115100 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 04:45:28.118987 systemd-logind[1514]: New session 23 of user core.
Jul 11 04:45:28.129495 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 04:45:29.795320 kubelet[2675]: I0711 04:45:29.795241 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kw4vv" podStartSLOduration=68.795214465 podStartE2EDuration="1m8.795214465s" podCreationTimestamp="2025-07-11 04:44:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:44:47.234864635 +0000 UTC m=+31.302880758" watchObservedRunningTime="2025-07-11 04:45:29.795214465 +0000 UTC m=+73.863230508"
Jul 11 04:45:29.804820 containerd[1557]: time="2025-07-11T04:45:29.804552770Z" level=info msg="StopContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" with timeout 30 (s)"
Jul 11 04:45:29.805450 containerd[1557]: time="2025-07-11T04:45:29.805338343Z" level=info msg="Stop container \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" with signal terminated"
Jul 11 04:45:29.816598 systemd[1]: cri-containerd-6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd.scope: Deactivated successfully.
Jul 11 04:45:29.818565 containerd[1557]: time="2025-07-11T04:45:29.818528068Z" level=info msg="received exit event container_id:\"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" id:\"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" pid:3229 exited_at:{seconds:1752209129 nanos:817735856}"
Jul 11 04:45:29.818652 containerd[1557]: time="2025-07-11T04:45:29.818602869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" id:\"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" pid:3229 exited_at:{seconds:1752209129 nanos:817735856}"
Jul 11 04:45:29.837567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd-rootfs.mount: Deactivated successfully.
Jul 11 04:45:29.839005 containerd[1557]: time="2025-07-11T04:45:29.838946146Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 04:45:29.841986 containerd[1557]: time="2025-07-11T04:45:29.841906992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" id:\"be0954d12a0d3972ed85476a782db7135e4cb71c169c3cc76a42d3625f7ebca8\" pid:4326 exited_at:{seconds:1752209129 nanos:841662948}"
Jul 11 04:45:29.843759 containerd[1557]: time="2025-07-11T04:45:29.843735941Z" level=info msg="StopContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" with timeout 2 (s)"
Jul 11 04:45:29.844195 containerd[1557]: time="2025-07-11T04:45:29.844176108Z" level=info msg="Stop container \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" with signal terminated"
Jul 11 04:45:29.848951 containerd[1557]: time="2025-07-11T04:45:29.848915341Z" level=info msg="StopContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" returns successfully"
Jul 11 04:45:29.850952 systemd-networkd[1438]: lxc_health: Link DOWN
Jul 11 04:45:29.850964 systemd-networkd[1438]: lxc_health: Lost carrier
Jul 11 04:45:29.857657 containerd[1557]: time="2025-07-11T04:45:29.857607997Z" level=info msg="StopPodSandbox for \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\""
Jul 11 04:45:29.868089 systemd[1]: cri-containerd-e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c.scope: Deactivated successfully.
Jul 11 04:45:29.869281 systemd[1]: cri-containerd-e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c.scope: Consumed 6.303s CPU time, 122M memory peak, 120K read from disk, 12.9M written to disk.
Jul 11 04:45:29.870500 containerd[1557]: time="2025-07-11T04:45:29.870456237Z" level=info msg="received exit event container_id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" pid:3339 exited_at:{seconds:1752209129 nanos:869124176}"
Jul 11 04:45:29.870573 containerd[1557]: time="2025-07-11T04:45:29.870528558Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" id:\"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" pid:3339 exited_at:{seconds:1752209129 nanos:869124176}"
Jul 11 04:45:29.874492 containerd[1557]: time="2025-07-11T04:45:29.874450979Z" level=info msg="Container to stop \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.880453 systemd[1]: cri-containerd-32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378.scope: Deactivated successfully.
Jul 11 04:45:29.881474 containerd[1557]: time="2025-07-11T04:45:29.881434808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" id:\"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" pid:2834 exit_status:137 exited_at:{seconds:1752209129 nanos:881158004}"
Jul 11 04:45:29.891062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c-rootfs.mount: Deactivated successfully.
Jul 11 04:45:29.908682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378-rootfs.mount: Deactivated successfully.
Jul 11 04:45:29.912253 containerd[1557]: time="2025-07-11T04:45:29.912209087Z" level=info msg="StopContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" returns successfully"
Jul 11 04:45:29.912722 containerd[1557]: time="2025-07-11T04:45:29.912690335Z" level=info msg="StopPodSandbox for \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\""
Jul 11 04:45:29.912770 containerd[1557]: time="2025-07-11T04:45:29.912760736Z" level=info msg="Container to stop \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.912794 containerd[1557]: time="2025-07-11T04:45:29.912772176Z" level=info msg="Container to stop \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.912794 containerd[1557]: time="2025-07-11T04:45:29.912780856Z" level=info msg="Container to stop \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.912794 containerd[1557]: time="2025-07-11T04:45:29.912788856Z" level=info msg="Container to stop \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.912877 containerd[1557]: time="2025-07-11T04:45:29.912797616Z" level=info msg="Container to stop \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 11 04:45:29.913380 containerd[1557]: time="2025-07-11T04:45:29.913332105Z" level=info msg="shim disconnected" id=32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378 namespace=k8s.io
Jul 11 04:45:29.913461 containerd[1557]: time="2025-07-11T04:45:29.913376305Z" level=warning msg="cleaning up after shim disconnected" id=32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378 namespace=k8s.io
Jul 11 04:45:29.913461 containerd[1557]: time="2025-07-11T04:45:29.913458227Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 04:45:29.918812 systemd[1]: cri-containerd-928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648.scope: Deactivated successfully.
Jul 11 04:45:29.931099 containerd[1557]: time="2025-07-11T04:45:29.930868738Z" level=info msg="TaskExit event in podsandbox handler container_id:\"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" id:\"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" pid:2789 exit_status:137 exited_at:{seconds:1752209129 nanos:920748260}"
Jul 11 04:45:29.932676 containerd[1557]: time="2025-07-11T04:45:29.931471427Z" level=info msg="TearDown network for sandbox \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" successfully"
Jul 11 04:45:29.932676 containerd[1557]: time="2025-07-11T04:45:29.931497068Z" level=info msg="StopPodSandbox for \"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" returns successfully"
Jul 11 04:45:29.933080 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378-shm.mount: Deactivated successfully.
Jul 11 04:45:29.939396 containerd[1557]: time="2025-07-11T04:45:29.938709860Z" level=info msg="received exit event sandbox_id:\"32bfb4410dd0ad9766e7fecafb537d93ab1f0162f96bb6e64c8921cfb11c7378\" exit_status:137 exited_at:{seconds:1752209129 nanos:881158004}"
Jul 11 04:45:29.940173 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648-rootfs.mount: Deactivated successfully.
Jul 11 04:45:29.946640 containerd[1557]: time="2025-07-11T04:45:29.946603343Z" level=info msg="received exit event sandbox_id:\"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" exit_status:137 exited_at:{seconds:1752209129 nanos:920748260}"
Jul 11 04:45:29.946954 containerd[1557]: time="2025-07-11T04:45:29.946930468Z" level=info msg="shim disconnected" id=928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648 namespace=k8s.io
Jul 11 04:45:29.947063 containerd[1557]: time="2025-07-11T04:45:29.946956269Z" level=warning msg="cleaning up after shim disconnected" id=928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648 namespace=k8s.io
Jul 11 04:45:29.947063 containerd[1557]: time="2025-07-11T04:45:29.947058790Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 11 04:45:29.947175 containerd[1557]: time="2025-07-11T04:45:29.947138631Z" level=info msg="TearDown network for sandbox \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" successfully"
Jul 11 04:45:29.947175 containerd[1557]: time="2025-07-11T04:45:29.947165312Z" level=info msg="StopPodSandbox for \"928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648\" returns successfully"
Jul 11 04:45:30.005366 kubelet[2675]: E0711 04:45:30.005085 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 04:45:30.042502 kubelet[2675]: I0711 04:45:30.042453 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5777187a-576e-497e-bae2-254ba08866e7-cilium-config-path\") pod \"5777187a-576e-497e-bae2-254ba08866e7\" (UID: \"5777187a-576e-497e-bae2-254ba08866e7\") "
Jul 11 04:45:30.042502 kubelet[2675]: I0711 04:45:30.042504 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx88m\" (UniqueName: \"kubernetes.io/projected/5777187a-576e-497e-bae2-254ba08866e7-kube-api-access-zx88m\") pod \"5777187a-576e-497e-bae2-254ba08866e7\" (UID: \"5777187a-576e-497e-bae2-254ba08866e7\") "
Jul 11 04:45:30.047383 kubelet[2675]: I0711 04:45:30.047271 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5777187a-576e-497e-bae2-254ba08866e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5777187a-576e-497e-bae2-254ba08866e7" (UID: "5777187a-576e-497e-bae2-254ba08866e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 11 04:45:30.049435 kubelet[2675]: I0711 04:45:30.049388 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5777187a-576e-497e-bae2-254ba08866e7-kube-api-access-zx88m" (OuterVolumeSpecName: "kube-api-access-zx88m") pod "5777187a-576e-497e-bae2-254ba08866e7" (UID: "5777187a-576e-497e-bae2-254ba08866e7"). InnerVolumeSpecName "kube-api-access-zx88m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 04:45:30.142877 kubelet[2675]: I0711 04:45:30.142837 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-hubble-tls\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") "
Jul 11 04:45:30.142877 kubelet[2675]: I0711 04:45:30.142877 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-hostproc\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") "
Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142898 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cni-path\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") "
Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142917 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-config-path\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") "
Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142939 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-run\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") "
Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142957 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName:
\"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-etc-cni-netd\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142972 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-net\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143014 kubelet[2675]: I0711 04:45:30.142973 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cni-path" (OuterVolumeSpecName: "cni-path") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.142986 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-lib-modules\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.143044 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h49g\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-kube-api-access-2h49g\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.143067 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-bpf-maps\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") 
" Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.143086 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19aab396-44fe-4774-8e8b-8e78779ca391-clustermesh-secrets\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.143101 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-kernel\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143145 kubelet[2675]: I0711 04:45:30.143117 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-xtables-lock\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143294 kubelet[2675]: I0711 04:45:30.143133 2675 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-cgroup\") pod \"19aab396-44fe-4774-8e8b-8e78779ca391\" (UID: \"19aab396-44fe-4774-8e8b-8e78779ca391\") " Jul 11 04:45:30.143294 kubelet[2675]: I0711 04:45:30.143176 2675 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.143294 kubelet[2675]: I0711 04:45:30.143187 2675 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5777187a-576e-497e-bae2-254ba08866e7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.143294 kubelet[2675]: 
I0711 04:45:30.143196 2675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx88m\" (UniqueName: \"kubernetes.io/projected/5777187a-576e-497e-bae2-254ba08866e7-kube-api-access-zx88m\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.143294 kubelet[2675]: I0711 04:45:30.143011 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.143294 kubelet[2675]: I0711 04:45:30.143215 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.143475 kubelet[2675]: I0711 04:45:30.143226 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-hostproc" (OuterVolumeSpecName: "hostproc") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.144980 kubelet[2675]: I0711 04:45:30.144806 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 04:45:30.144980 kubelet[2675]: I0711 04:45:30.144876 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.144980 kubelet[2675]: I0711 04:45:30.144892 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.144980 kubelet[2675]: I0711 04:45:30.144905 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.144980 kubelet[2675]: I0711 04:45:30.144919 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.145140 kubelet[2675]: I0711 04:45:30.144933 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.145140 kubelet[2675]: I0711 04:45:30.144946 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 11 04:45:30.145513 kubelet[2675]: I0711 04:45:30.145177 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 04:45:30.145687 kubelet[2675]: I0711 04:45:30.145663 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-kube-api-access-2h49g" (OuterVolumeSpecName: "kube-api-access-2h49g") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "kube-api-access-2h49g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 04:45:30.148287 kubelet[2675]: I0711 04:45:30.148246 2675 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19aab396-44fe-4774-8e8b-8e78779ca391-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "19aab396-44fe-4774-8e8b-8e78779ca391" (UID: "19aab396-44fe-4774-8e8b-8e78779ca391"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 04:45:30.244096 kubelet[2675]: I0711 04:45:30.244044 2675 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/19aab396-44fe-4774-8e8b-8e78779ca391-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244096 kubelet[2675]: I0711 04:45:30.244076 2675 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244096 kubelet[2675]: I0711 04:45:30.244089 2675 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244096 kubelet[2675]: I0711 04:45:30.244098 2675 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h49g\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-kube-api-access-2h49g\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244096 kubelet[2675]: I0711 04:45:30.244107 2675 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244137 2675 reconciler_common.go:293] "Volume detached for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244145 2675 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244153 2675 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/19aab396-44fe-4774-8e8b-8e78779ca391-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244161 2675 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244169 2675 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244176 2675 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244184 2675 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 11 04:45:30.244406 kubelet[2675]: I0711 04:45:30.244192 2675 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/19aab396-44fe-4774-8e8b-8e78779ca391-host-proc-sys-net\") on node 
\"localhost\" DevicePath \"\"" Jul 11 04:45:30.281951 kubelet[2675]: I0711 04:45:30.281924 2675 scope.go:117] "RemoveContainer" containerID="6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd" Jul 11 04:45:30.284696 containerd[1557]: time="2025-07-11T04:45:30.284604553Z" level=info msg="RemoveContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\"" Jul 11 04:45:30.286741 systemd[1]: Removed slice kubepods-besteffort-pod5777187a_576e_497e_bae2_254ba08866e7.slice - libcontainer container kubepods-besteffort-pod5777187a_576e_497e_bae2_254ba08866e7.slice. Jul 11 04:45:30.290961 containerd[1557]: time="2025-07-11T04:45:30.289298185Z" level=info msg="RemoveContainer for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" returns successfully" Jul 11 04:45:30.291034 kubelet[2675]: I0711 04:45:30.290602 2675 scope.go:117] "RemoveContainer" containerID="6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd" Jul 11 04:45:30.291074 containerd[1557]: time="2025-07-11T04:45:30.291012251Z" level=error msg="ContainerStatus for \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\": not found" Jul 11 04:45:30.291184 kubelet[2675]: E0711 04:45:30.291156 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\": not found" containerID="6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd" Jul 11 04:45:30.291274 kubelet[2675]: I0711 04:45:30.291190 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd"} err="failed to get container status 
\"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\": rpc error: code = NotFound desc = an error occurred when try to find container \"6263250092bced9566d934424771bf4fcbf5c07b826f7f85c5910e7c0998dbbd\": not found" Jul 11 04:45:30.291304 kubelet[2675]: I0711 04:45:30.291277 2675 scope.go:117] "RemoveContainer" containerID="e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c" Jul 11 04:45:30.292759 containerd[1557]: time="2025-07-11T04:45:30.292660236Z" level=info msg="RemoveContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\"" Jul 11 04:45:30.296620 containerd[1557]: time="2025-07-11T04:45:30.296573616Z" level=info msg="RemoveContainer for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" returns successfully" Jul 11 04:45:30.297705 kubelet[2675]: I0711 04:45:30.296727 2675 scope.go:117] "RemoveContainer" containerID="c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df" Jul 11 04:45:30.298328 systemd[1]: Removed slice kubepods-burstable-pod19aab396_44fe_4774_8e8b_8e78779ca391.slice - libcontainer container kubepods-burstable-pod19aab396_44fe_4774_8e8b_8e78779ca391.slice. Jul 11 04:45:30.298432 systemd[1]: kubepods-burstable-pod19aab396_44fe_4774_8e8b_8e78779ca391.slice: Consumed 6.454s CPU time, 122.3M memory peak, 124K read from disk, 16.1M written to disk. 
Jul 11 04:45:30.301333 containerd[1557]: time="2025-07-11T04:45:30.301285368Z" level=info msg="RemoveContainer for \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\"" Jul 11 04:45:30.305493 containerd[1557]: time="2025-07-11T04:45:30.305350390Z" level=info msg="RemoveContainer for \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" returns successfully" Jul 11 04:45:30.305883 kubelet[2675]: I0711 04:45:30.305692 2675 scope.go:117] "RemoveContainer" containerID="9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700" Jul 11 04:45:30.309736 containerd[1557]: time="2025-07-11T04:45:30.308675080Z" level=info msg="RemoveContainer for \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\"" Jul 11 04:45:30.318445 containerd[1557]: time="2025-07-11T04:45:30.318401028Z" level=info msg="RemoveContainer for \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" returns successfully" Jul 11 04:45:30.318668 kubelet[2675]: I0711 04:45:30.318634 2675 scope.go:117] "RemoveContainer" containerID="81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd" Jul 11 04:45:30.320095 containerd[1557]: time="2025-07-11T04:45:30.320071894Z" level=info msg="RemoveContainer for \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\"" Jul 11 04:45:30.322729 containerd[1557]: time="2025-07-11T04:45:30.322702774Z" level=info msg="RemoveContainer for \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" returns successfully" Jul 11 04:45:30.322857 kubelet[2675]: I0711 04:45:30.322836 2675 scope.go:117] "RemoveContainer" containerID="0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f" Jul 11 04:45:30.324172 containerd[1557]: time="2025-07-11T04:45:30.324145796Z" level=info msg="RemoveContainer for \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\"" Jul 11 04:45:30.326486 containerd[1557]: time="2025-07-11T04:45:30.326462631Z" level=info msg="RemoveContainer 
for \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" returns successfully" Jul 11 04:45:30.326640 kubelet[2675]: I0711 04:45:30.326610 2675 scope.go:117] "RemoveContainer" containerID="e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c" Jul 11 04:45:30.326914 containerd[1557]: time="2025-07-11T04:45:30.326874078Z" level=error msg="ContainerStatus for \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\": not found" Jul 11 04:45:30.327072 kubelet[2675]: E0711 04:45:30.327046 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\": not found" containerID="e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c" Jul 11 04:45:30.327114 kubelet[2675]: I0711 04:45:30.327075 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c"} err="failed to get container status \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e29e59b78434611e9abafb76adb2477316edc8fa1c9eb6fd834e039bb1bd2f0c\": not found" Jul 11 04:45:30.327114 kubelet[2675]: I0711 04:45:30.327095 2675 scope.go:117] "RemoveContainer" containerID="c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df" Jul 11 04:45:30.327248 containerd[1557]: time="2025-07-11T04:45:30.327221483Z" level=error msg="ContainerStatus for \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\": not found" Jul 11 04:45:30.327380 kubelet[2675]: E0711 04:45:30.327350 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\": not found" containerID="c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df" Jul 11 04:45:30.327380 kubelet[2675]: I0711 04:45:30.327366 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df"} err="failed to get container status \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\": rpc error: code = NotFound desc = an error occurred when try to find container \"c3956f6feaede1a3e941cc6a20060922b8452ac4409f43c0fdcc719670b107df\": not found" Jul 11 04:45:30.327380 kubelet[2675]: I0711 04:45:30.327379 2675 scope.go:117] "RemoveContainer" containerID="9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700" Jul 11 04:45:30.327538 containerd[1557]: time="2025-07-11T04:45:30.327491767Z" level=error msg="ContainerStatus for \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\": not found" Jul 11 04:45:30.327605 kubelet[2675]: E0711 04:45:30.327580 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\": not found" containerID="9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700" Jul 11 04:45:30.327605 kubelet[2675]: I0711 04:45:30.327598 2675 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700"} err="failed to get container status \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\": rpc error: code = NotFound desc = an error occurred when try to find container \"9edb120fd2b6923419e59e33eb3f03bdb32d1da70647c3830a92b6139ec12700\": not found" Jul 11 04:45:30.327711 kubelet[2675]: I0711 04:45:30.327609 2675 scope.go:117] "RemoveContainer" containerID="81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd" Jul 11 04:45:30.327914 containerd[1557]: time="2025-07-11T04:45:30.327876053Z" level=error msg="ContainerStatus for \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\": not found" Jul 11 04:45:30.328097 kubelet[2675]: E0711 04:45:30.328067 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\": not found" containerID="81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd" Jul 11 04:45:30.328137 kubelet[2675]: I0711 04:45:30.328093 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd"} err="failed to get container status \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\": rpc error: code = NotFound desc = an error occurred when try to find container \"81b0197102b64c2c5efa284ac960979e653240b067f8c2ccb260f9144536bedd\": not found" Jul 11 04:45:30.328137 kubelet[2675]: I0711 04:45:30.328116 2675 scope.go:117] "RemoveContainer" containerID="0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f" Jul 11 04:45:30.328327 containerd[1557]: 
time="2025-07-11T04:45:30.328278619Z" level=error msg="ContainerStatus for \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\": not found" Jul 11 04:45:30.328452 kubelet[2675]: E0711 04:45:30.328434 2675 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\": not found" containerID="0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f" Jul 11 04:45:30.328507 kubelet[2675]: I0711 04:45:30.328453 2675 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f"} err="failed to get container status \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0dab091a1c99e29185b7848b58af7251bf289ec20b879b4cfe5892f3c4e2ff6f\": not found" Jul 11 04:45:30.837483 systemd[1]: var-lib-kubelet-pods-5777187a\x2d576e\x2d497e\x2dbae2\x2d254ba08866e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzx88m.mount: Deactivated successfully. Jul 11 04:45:30.837588 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-928bf807e821a99c3dfd3badfd782695f8cef51f788744b07b39d642717b7648-shm.mount: Deactivated successfully. Jul 11 04:45:30.837639 systemd[1]: var-lib-kubelet-pods-19aab396\x2d44fe\x2d4774\x2d8e8b\x2d8e78779ca391-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2h49g.mount: Deactivated successfully. Jul 11 04:45:30.837696 systemd[1]: var-lib-kubelet-pods-19aab396\x2d44fe\x2d4774\x2d8e8b\x2d8e78779ca391-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 11 04:45:30.837746 systemd[1]: var-lib-kubelet-pods-19aab396\x2d44fe\x2d4774\x2d8e8b\x2d8e78779ca391-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 04:45:31.055204 kubelet[2675]: E0711 04:45:31.055160 2675 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 04:45:31.770152 sshd[4296]: Connection closed by 10.0.0.1 port 36384 Jul 11 04:45:31.771496 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jul 11 04:45:31.780911 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:36384.service: Deactivated successfully. Jul 11 04:45:31.782696 systemd[1]: session-23.scope: Deactivated successfully. Jul 11 04:45:31.783029 systemd[1]: session-23.scope: Consumed 1.012s CPU time, 24.8M memory peak. Jul 11 04:45:31.783671 systemd-logind[1514]: Session 23 logged out. Waiting for processes to exit. Jul 11 04:45:31.786725 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:36388.service - OpenSSH per-connection server daemon (10.0.0.1:36388). Jul 11 04:45:31.787613 systemd-logind[1514]: Removed session 23. Jul 11 04:45:31.836807 sshd[4450]: Accepted publickey for core from 10.0.0.1 port 36388 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:45:31.837920 sshd-session[4450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:45:31.842525 systemd-logind[1514]: New session 24 of user core. Jul 11 04:45:31.852564 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 11 04:45:31.900399 containerd[1557]: time="2025-07-11T04:45:31.900348371Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1752209129 nanos:881158004}" Jul 11 04:45:32.007679 kubelet[2675]: I0711 04:45:32.007628 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" path="/var/lib/kubelet/pods/19aab396-44fe-4774-8e8b-8e78779ca391/volumes" Jul 11 04:45:32.008166 kubelet[2675]: I0711 04:45:32.008133 2675 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5777187a-576e-497e-bae2-254ba08866e7" path="/var/lib/kubelet/pods/5777187a-576e-497e-bae2-254ba08866e7/volumes" Jul 11 04:45:33.044519 sshd[4453]: Connection closed by 10.0.0.1 port 36388 Jul 11 04:45:33.044427 sshd-session[4450]: pam_unix(sshd:session): session closed for user core Jul 11 04:45:33.054280 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:36388.service: Deactivated successfully. Jul 11 04:45:33.058255 systemd[1]: session-24.scope: Deactivated successfully. Jul 11 04:45:33.058523 systemd[1]: session-24.scope: Consumed 1.119s CPU time, 26.1M memory peak. Jul 11 04:45:33.059065 systemd-logind[1514]: Session 24 logged out. Waiting for processes to exit. 
Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063581 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="mount-cgroup" Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063612 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="mount-bpf-fs" Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063620 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="clean-cilium-state" Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063626 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="cilium-agent" Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063631 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="apply-sysctl-overwrites" Jul 11 04:45:33.063831 kubelet[2675]: E0711 04:45:33.063636 2675 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5777187a-576e-497e-bae2-254ba08866e7" containerName="cilium-operator" Jul 11 04:45:33.064181 kubelet[2675]: I0711 04:45:33.063945 2675 memory_manager.go:354] "RemoveStaleState removing state" podUID="5777187a-576e-497e-bae2-254ba08866e7" containerName="cilium-operator" Jul 11 04:45:33.064181 kubelet[2675]: I0711 04:45:33.063962 2675 memory_manager.go:354] "RemoveStaleState removing state" podUID="19aab396-44fe-4774-8e8b-8e78779ca391" containerName="cilium-agent" Jul 11 04:45:33.066574 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:44494.service - OpenSSH per-connection server daemon (10.0.0.1:44494). Jul 11 04:45:33.068677 systemd-logind[1514]: Removed session 24. 
Jul 11 04:45:33.094034 systemd[1]: Created slice kubepods-burstable-pode430a022_1bdc_4eaf_812a_695534b4e022.slice - libcontainer container kubepods-burstable-pode430a022_1bdc_4eaf_812a_695534b4e022.slice. Jul 11 04:45:33.126652 sshd[4466]: Accepted publickey for core from 10.0.0.1 port 44494 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:45:33.128007 sshd-session[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:45:33.131944 systemd-logind[1514]: New session 25 of user core. Jul 11 04:45:33.138456 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 11 04:45:33.160441 kubelet[2675]: I0711 04:45:33.160284 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-etc-cni-netd\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160441 kubelet[2675]: I0711 04:45:33.160408 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e430a022-1bdc-4eaf-812a-695534b4e022-cilium-ipsec-secrets\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160553 kubelet[2675]: I0711 04:45:33.160454 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpzn\" (UniqueName: \"kubernetes.io/projected/e430a022-1bdc-4eaf-812a-695534b4e022-kube-api-access-hwpzn\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160553 kubelet[2675]: I0711 04:45:33.160478 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-host-proc-sys-net\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160553 kubelet[2675]: I0711 04:45:33.160495 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-host-proc-sys-kernel\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160553 kubelet[2675]: I0711 04:45:33.160513 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-cilium-run\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160553 kubelet[2675]: I0711 04:45:33.160545 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e430a022-1bdc-4eaf-812a-695534b4e022-cilium-config-path\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160678 kubelet[2675]: I0711 04:45:33.160565 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e430a022-1bdc-4eaf-812a-695534b4e022-hubble-tls\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160678 kubelet[2675]: I0711 04:45:33.160596 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-cilium-cgroup\") pod \"cilium-29m8q\" (UID: 
\"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160678 kubelet[2675]: I0711 04:45:33.160613 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e430a022-1bdc-4eaf-812a-695534b4e022-clustermesh-secrets\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160678 kubelet[2675]: I0711 04:45:33.160640 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-cni-path\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160678 kubelet[2675]: I0711 04:45:33.160659 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-lib-modules\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160781 kubelet[2675]: I0711 04:45:33.160692 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-bpf-maps\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160781 kubelet[2675]: I0711 04:45:33.160713 2675 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-hostproc\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.160781 kubelet[2675]: I0711 04:45:33.160736 2675 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e430a022-1bdc-4eaf-812a-695534b4e022-xtables-lock\") pod \"cilium-29m8q\" (UID: \"e430a022-1bdc-4eaf-812a-695534b4e022\") " pod="kube-system/cilium-29m8q" Jul 11 04:45:33.187354 sshd[4469]: Connection closed by 10.0.0.1 port 44494 Jul 11 04:45:33.188421 sshd-session[4466]: pam_unix(sshd:session): session closed for user core Jul 11 04:45:33.202518 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:44494.service: Deactivated successfully. Jul 11 04:45:33.204798 systemd[1]: session-25.scope: Deactivated successfully. Jul 11 04:45:33.205435 systemd-logind[1514]: Session 25 logged out. Waiting for processes to exit. Jul 11 04:45:33.208016 systemd-logind[1514]: Removed session 25. Jul 11 04:45:33.209696 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:44496.service - OpenSSH per-connection server daemon (10.0.0.1:44496). Jul 11 04:45:33.259911 sshd[4476]: Accepted publickey for core from 10.0.0.1 port 44496 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 04:45:33.261096 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 04:45:33.277397 systemd-logind[1514]: New session 26 of user core. Jul 11 04:45:33.288457 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 11 04:45:33.399059 kubelet[2675]: E0711 04:45:33.398948 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:33.399923 containerd[1557]: time="2025-07-11T04:45:33.399760822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29m8q,Uid:e430a022-1bdc-4eaf-812a-695534b4e022,Namespace:kube-system,Attempt:0,}" Jul 11 04:45:33.416142 containerd[1557]: time="2025-07-11T04:45:33.416105376Z" level=info msg="connecting to shim e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" namespace=k8s.io protocol=ttrpc version=3 Jul 11 04:45:33.438503 systemd[1]: Started cri-containerd-e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd.scope - libcontainer container e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd. 
Jul 11 04:45:33.459287 containerd[1557]: time="2025-07-11T04:45:33.459250673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-29m8q,Uid:e430a022-1bdc-4eaf-812a-695534b4e022,Namespace:kube-system,Attempt:0,} returns sandbox id \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\"" Jul 11 04:45:33.460333 kubelet[2675]: E0711 04:45:33.460219 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:33.462766 containerd[1557]: time="2025-07-11T04:45:33.462735683Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 04:45:33.468767 containerd[1557]: time="2025-07-11T04:45:33.468729448Z" level=info msg="Container 3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:45:33.482063 containerd[1557]: time="2025-07-11T04:45:33.482001678Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\"" Jul 11 04:45:33.483959 containerd[1557]: time="2025-07-11T04:45:33.482938892Z" level=info msg="StartContainer for \"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\"" Jul 11 04:45:33.483959 containerd[1557]: time="2025-07-11T04:45:33.483705303Z" level=info msg="connecting to shim 3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" protocol=ttrpc version=3 Jul 11 04:45:33.508552 systemd[1]: Started cri-containerd-3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1.scope - libcontainer 
container 3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1. Jul 11 04:45:33.531426 containerd[1557]: time="2025-07-11T04:45:33.531392785Z" level=info msg="StartContainer for \"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\" returns successfully" Jul 11 04:45:33.543591 systemd[1]: cri-containerd-3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1.scope: Deactivated successfully. Jul 11 04:45:33.545088 containerd[1557]: time="2025-07-11T04:45:33.544973979Z" level=info msg="received exit event container_id:\"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\" id:\"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\" pid:4548 exited_at:{seconds:1752209133 nanos:544524812}" Jul 11 04:45:33.545958 containerd[1557]: time="2025-07-11T04:45:33.545274063Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\" id:\"3ce58e38a850eacb4dbca1f87bd6de71d2d1bb412222ff64002d023df806ebe1\" pid:4548 exited_at:{seconds:1752209133 nanos:544524812}" Jul 11 04:45:34.300277 kubelet[2675]: E0711 04:45:34.300154 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:34.306466 containerd[1557]: time="2025-07-11T04:45:34.306419301Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 04:45:34.314730 containerd[1557]: time="2025-07-11T04:45:34.314682337Z" level=info msg="Container 8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:45:34.320309 containerd[1557]: time="2025-07-11T04:45:34.320267415Z" level=info msg="CreateContainer within sandbox 
\"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\"" Jul 11 04:45:34.320863 containerd[1557]: time="2025-07-11T04:45:34.320830503Z" level=info msg="StartContainer for \"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\"" Jul 11 04:45:34.321601 containerd[1557]: time="2025-07-11T04:45:34.321579433Z" level=info msg="connecting to shim 8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" protocol=ttrpc version=3 Jul 11 04:45:34.344477 systemd[1]: Started cri-containerd-8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2.scope - libcontainer container 8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2. Jul 11 04:45:34.365798 containerd[1557]: time="2025-07-11T04:45:34.365754332Z" level=info msg="StartContainer for \"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\" returns successfully" Jul 11 04:45:34.372192 systemd[1]: cri-containerd-8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2.scope: Deactivated successfully. 
Jul 11 04:45:34.374355 containerd[1557]: time="2025-07-11T04:45:34.374309132Z" level=info msg="received exit event container_id:\"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\" id:\"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\" pid:4596 exited_at:{seconds:1752209134 nanos:374116929}" Jul 11 04:45:34.374542 containerd[1557]: time="2025-07-11T04:45:34.374400693Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\" id:\"8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2\" pid:4596 exited_at:{seconds:1752209134 nanos:374116929}" Jul 11 04:45:34.390139 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8879c947453541c65c58c0afa30f604c2a55c88519051602ba380a57fcd842d2-rootfs.mount: Deactivated successfully. Jul 11 04:45:35.304841 kubelet[2675]: E0711 04:45:35.304678 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:35.307190 containerd[1557]: time="2025-07-11T04:45:35.307148354Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 04:45:35.325919 containerd[1557]: time="2025-07-11T04:45:35.325882851Z" level=info msg="Container f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:45:35.331815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2632543107.mount: Deactivated successfully. 
Jul 11 04:45:35.336257 containerd[1557]: time="2025-07-11T04:45:35.336226833Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\"" Jul 11 04:45:35.336706 containerd[1557]: time="2025-07-11T04:45:35.336684919Z" level=info msg="StartContainer for \"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\"" Jul 11 04:45:35.339092 containerd[1557]: time="2025-07-11T04:45:35.339039072Z" level=info msg="connecting to shim f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" protocol=ttrpc version=3 Jul 11 04:45:35.361504 systemd[1]: Started cri-containerd-f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a.scope - libcontainer container f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a. Jul 11 04:45:35.393619 containerd[1557]: time="2025-07-11T04:45:35.393581580Z" level=info msg="StartContainer for \"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\" returns successfully" Jul 11 04:45:35.394148 systemd[1]: cri-containerd-f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a.scope: Deactivated successfully. 
Jul 11 04:45:35.398321 containerd[1557]: time="2025-07-11T04:45:35.398280405Z" level=info msg="received exit event container_id:\"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\" id:\"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\" pid:4642 exited_at:{seconds:1752209135 nanos:397426993}" Jul 11 04:45:35.398379 containerd[1557]: time="2025-07-11T04:45:35.398354966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\" id:\"f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a\" pid:4642 exited_at:{seconds:1752209135 nanos:397426993}" Jul 11 04:45:35.426543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f589d5d8fc9285d209fe0db5900a7e6e97203cabb4b68ff24f61e499969c5d1a-rootfs.mount: Deactivated successfully. Jul 11 04:45:36.056669 kubelet[2675]: E0711 04:45:36.056625 2675 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 04:45:36.309963 kubelet[2675]: E0711 04:45:36.309652 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:36.312588 containerd[1557]: time="2025-07-11T04:45:36.312552827Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 04:45:36.320344 containerd[1557]: time="2025-07-11T04:45:36.320090009Z" level=info msg="Container 3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:45:36.329539 containerd[1557]: time="2025-07-11T04:45:36.329507615Z" level=info msg="CreateContainer within sandbox 
\"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\"" Jul 11 04:45:36.330270 containerd[1557]: time="2025-07-11T04:45:36.330244785Z" level=info msg="StartContainer for \"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\"" Jul 11 04:45:36.331390 containerd[1557]: time="2025-07-11T04:45:36.331131717Z" level=info msg="connecting to shim 3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" protocol=ttrpc version=3 Jul 11 04:45:36.354567 systemd[1]: Started cri-containerd-3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59.scope - libcontainer container 3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59. Jul 11 04:45:36.374342 systemd[1]: cri-containerd-3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59.scope: Deactivated successfully. 
Jul 11 04:45:36.375163 containerd[1557]: time="2025-07-11T04:45:36.374888026Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\" id:\"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\" pid:4681 exited_at:{seconds:1752209136 nanos:374601742}" Jul 11 04:45:36.375657 containerd[1557]: time="2025-07-11T04:45:36.375635356Z" level=info msg="received exit event container_id:\"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\" id:\"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\" pid:4681 exited_at:{seconds:1752209136 nanos:374601742}" Jul 11 04:45:36.382198 containerd[1557]: time="2025-07-11T04:45:36.382165484Z" level=info msg="StartContainer for \"3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59\" returns successfully" Jul 11 04:45:36.419214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b44f9087305b5af5db68cac376125aa0610259d63b7d04d57a43706ca7daa59-rootfs.mount: Deactivated successfully. 
Jul 11 04:45:37.314151 kubelet[2675]: E0711 04:45:37.314124 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:37.317448 containerd[1557]: time="2025-07-11T04:45:37.317406497Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 04:45:37.329430 containerd[1557]: time="2025-07-11T04:45:37.328907089Z" level=info msg="Container 0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b: CDI devices from CRI Config.CDIDevices: []" Jul 11 04:45:37.336933 containerd[1557]: time="2025-07-11T04:45:37.336826073Z" level=info msg="CreateContainer within sandbox \"e30e98c8adda8cabb0cbf4d7ce44bad90593a96a1868efe736d835327fbf33dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\"" Jul 11 04:45:37.337444 containerd[1557]: time="2025-07-11T04:45:37.337414601Z" level=info msg="StartContainer for \"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\"" Jul 11 04:45:37.338508 containerd[1557]: time="2025-07-11T04:45:37.338483535Z" level=info msg="connecting to shim 0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b" address="unix:///run/containerd/s/4377ef751cebc0e2c035eb8323983942544e310fd8ef05d85780d36a26b5c002" protocol=ttrpc version=3 Jul 11 04:45:37.357479 systemd[1]: Started cri-containerd-0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b.scope - libcontainer container 0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b. 
Jul 11 04:45:37.388613 containerd[1557]: time="2025-07-11T04:45:37.388496674Z" level=info msg="StartContainer for \"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" returns successfully" Jul 11 04:45:37.451952 containerd[1557]: time="2025-07-11T04:45:37.451915230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" id:\"ae3ef323d468f2baaf4baeff919e16eb803d5e7beb6ae2118aa5369207dedd21\" pid:4748 exited_at:{seconds:1752209137 nanos:451627866}" Jul 11 04:45:37.657340 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 11 04:45:37.896599 kubelet[2675]: I0711 04:45:37.896544 2675 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T04:45:37Z","lastTransitionTime":"2025-07-11T04:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 11 04:45:38.319938 kubelet[2675]: E0711 04:45:38.319903 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:38.335329 kubelet[2675]: I0711 04:45:38.335163 2675 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-29m8q" podStartSLOduration=5.335143347 podStartE2EDuration="5.335143347s" podCreationTimestamp="2025-07-11 04:45:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 04:45:38.334483179 +0000 UTC m=+82.402499222" watchObservedRunningTime="2025-07-11 04:45:38.335143347 +0000 UTC m=+82.403159390" Jul 11 04:45:39.400463 kubelet[2675]: E0711 04:45:39.400409 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:39.660365 containerd[1557]: time="2025-07-11T04:45:39.660233226Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" id:\"b3aa23209ac183c151bcef2de67a2799233e9670694238825299aaecee0c796b\" pid:5007 exit_status:1 exited_at:{seconds:1752209139 nanos:659841461}" Jul 11 04:45:40.628060 systemd-networkd[1438]: lxc_health: Link UP Jul 11 04:45:40.638292 systemd-networkd[1438]: lxc_health: Gained carrier Jul 11 04:45:41.401254 kubelet[2675]: E0711 04:45:41.400958 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:41.793857 containerd[1557]: time="2025-07-11T04:45:41.793803563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" id:\"4abc2b1c1ac07258202cddd2ad6b6e3de82b028bcbc34b8fefa302257c943c47\" pid:5284 exited_at:{seconds:1752209141 nanos:793448598}" Jul 11 04:45:42.329190 kubelet[2675]: E0711 04:45:42.329148 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:42.471435 systemd-networkd[1438]: lxc_health: Gained IPv6LL Jul 11 04:45:43.332291 kubelet[2675]: E0711 04:45:43.332264 2675 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 04:45:43.895233 containerd[1557]: time="2025-07-11T04:45:43.895194800Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" 
id:\"8c9fef554a68aaf47c56c7e0bcddd1341c87472d9e4349bb1906bcd492e85140\" pid:5310 exited_at:{seconds:1752209143 nanos:894832715}" Jul 11 04:45:46.025813 containerd[1557]: time="2025-07-11T04:45:46.025742695Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f50bc1ac5acfe22715fda75d0f2494a12a8498691904e5b548e8d18253bbf9b\" id:\"63d0c1965c44c4fce250033be950625e440370ac1cbce522ba480cf882251a96\" pid:5343 exited_at:{seconds:1752209146 nanos:24926726}" Jul 11 04:45:46.029582 sshd[4484]: Connection closed by 10.0.0.1 port 44496 Jul 11 04:45:46.030188 sshd-session[4476]: pam_unix(sshd:session): session closed for user core Jul 11 04:45:46.034204 systemd-logind[1514]: Session 26 logged out. Waiting for processes to exit. Jul 11 04:45:46.034445 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:44496.service: Deactivated successfully. Jul 11 04:45:46.036017 systemd[1]: session-26.scope: Deactivated successfully. Jul 11 04:45:46.037623 systemd-logind[1514]: Removed session 26.