Jul 15 23:16:56.819892 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 23:16:56.819912 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:16:56.819922 kernel: KASLR enabled
Jul 15 23:16:56.819927 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:16:56.819933 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 15 23:16:56.819938 kernel: random: crng init done
Jul 15 23:16:56.819945 kernel: secureboot: Secure boot disabled
Jul 15 23:16:56.819951 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:16:56.819956 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 15 23:16:56.819963 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 23:16:56.819969 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.819975 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.819980 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.819986 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.819993 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.820001 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.820007 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.820013 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.820019 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:16:56.820025 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 23:16:56.820031 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:16:56.820037 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:16:56.820043 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 15 23:16:56.820049 kernel: Zone ranges:
Jul 15 23:16:56.820055 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:16:56.820063 kernel: DMA32 empty
Jul 15 23:16:56.820069 kernel: Normal empty
Jul 15 23:16:56.820075 kernel: Device empty
Jul 15 23:16:56.820080 kernel: Movable zone start for each node
Jul 15 23:16:56.820086 kernel: Early memory node ranges
Jul 15 23:16:56.820092 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 15 23:16:56.820098 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 15 23:16:56.820104 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 15 23:16:56.820110 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 15 23:16:56.820116 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 15 23:16:56.820122 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 15 23:16:56.820128 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 15 23:16:56.820135 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 15 23:16:56.820141 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 15 23:16:56.820147 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 15 23:16:56.820156 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 15 23:16:56.820162 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 15 23:16:56.820169 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 23:16:56.820177 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:16:56.820183 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 23:16:56.820189 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 15 23:16:56.820196 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:16:56.820202 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:16:56.820221 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:16:56.820228 kernel: psci: Trusted OS migration not required
Jul 15 23:16:56.820234 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:16:56.820240 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 23:16:56.820247 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:16:56.820256 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:16:56.820262 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 23:16:56.820269 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:16:56.820275 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:16:56.820282 kernel: CPU features: detected: Spectre-v4
Jul 15 23:16:56.820288 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:16:56.820294 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:16:56.820301 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:16:56.820307 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 23:16:56.820313 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:16:56.820325 kernel: alternatives: applying boot alternatives
Jul 15 23:16:56.820333 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:16:56.820341 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:16:56.820348 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:16:56.820354 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:16:56.820361 kernel: Fallback order for Node 0: 0
Jul 15 23:16:56.820367 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 23:16:56.820373 kernel: Policy zone: DMA
Jul 15 23:16:56.820380 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:16:56.820386 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 23:16:56.820393 kernel: software IO TLB: area num 4.
Jul 15 23:16:56.820399 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 23:16:56.820406 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 15 23:16:56.820414 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:16:56.820420 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:16:56.820427 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:16:56.820434 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:16:56.820440 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:16:56.820447 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:16:56.820453 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:16:56.820459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:16:56.820466 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:16:56.820472 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:16:56.820479 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:16:56.820486 kernel: GICv3: 256 SPIs implemented
Jul 15 23:16:56.820493 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:16:56.820499 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:16:56.820506 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 23:16:56.820512 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:16:56.820518 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 23:16:56.820524 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 23:16:56.820531 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:16:56.820538 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:16:56.820544 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 23:16:56.820550 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 23:16:56.820557 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:16:56.820564 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:16:56.820571 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 23:16:56.820578 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 23:16:56.820584 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 23:16:56.820590 kernel: arm-pv: using stolen time PV
Jul 15 23:16:56.820597 kernel: Console: colour dummy device 80x25
Jul 15 23:16:56.820603 kernel: ACPI: Core revision 20240827
Jul 15 23:16:56.820610 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 23:16:56.820617 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:16:56.820623 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:16:56.820631 kernel: landlock: Up and running.
Jul 15 23:16:56.820638 kernel: SELinux: Initializing.
Jul 15 23:16:56.820644 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:16:56.820651 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:16:56.820657 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:16:56.820664 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:16:56.820671 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:16:56.820677 kernel: Remapping and enabling EFI services.
Jul 15 23:16:56.820684 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:16:56.820696 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:16:56.820703 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 23:16:56.820709 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 23:16:56.820718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:16:56.820724 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 23:16:56.820731 kernel: Detected PIPT I-cache on CPU2
Jul 15 23:16:56.820738 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 23:16:56.820746 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 23:16:56.820754 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:16:56.820761 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 23:16:56.820767 kernel: Detected PIPT I-cache on CPU3
Jul 15 23:16:56.820774 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 23:16:56.820781 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 23:16:56.820788 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:16:56.820795 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 23:16:56.820802 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:16:56.820809 kernel: SMP: Total of 4 processors activated.
Jul 15 23:16:56.820817 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:16:56.820824 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:16:56.820831 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 23:16:56.820838 kernel: CPU features: detected: Common not Private translations
Jul 15 23:16:56.820844 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:16:56.820851 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 23:16:56.820858 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 23:16:56.820865 kernel: CPU features: detected: LSE atomic instructions
Jul 15 23:16:56.820872 kernel: CPU features: detected: Privileged Access Never
Jul 15 23:16:56.820880 kernel: CPU features: detected: RAS Extension Support
Jul 15 23:16:56.820887 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 23:16:56.820894 kernel: alternatives: applying system-wide alternatives
Jul 15 23:16:56.820901 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 23:16:56.820908 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Jul 15 23:16:56.820915 kernel: devtmpfs: initialized
Jul 15 23:16:56.820922 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:16:56.820929 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:16:56.820936 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 23:16:56.820944 kernel: 0 pages in range for non-PLT usage
Jul 15 23:16:56.820951 kernel: 508432 pages in range for PLT usage
Jul 15 23:16:56.820958 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:16:56.820964 kernel: SMBIOS 3.0.0 present.
Jul 15 23:16:56.820971 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 23:16:56.820978 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:16:56.820985 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:16:56.820992 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:16:56.820998 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:16:56.821007 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:16:56.821014 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:16:56.821020 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 15 23:16:56.821027 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:16:56.821034 kernel: cpuidle: using governor menu
Jul 15 23:16:56.821041 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:16:56.821048 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:16:56.821055 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:16:56.821061 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:16:56.821069 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:16:56.821076 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:16:56.821083 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:16:56.821090 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:16:56.821097 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:16:56.821104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:16:56.821110 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:16:56.821117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:16:56.821124 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:16:56.821132 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:16:56.821139 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:16:56.821146 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:16:56.821153 kernel: ACPI: Interpreter enabled
Jul 15 23:16:56.821159 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:16:56.821166 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:16:56.821173 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:16:56.821180 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:16:56.821187 kernel: ACPI: CPU2 has been hot-added
Jul 15 23:16:56.821193 kernel: ACPI: CPU3 has been hot-added
Jul 15 23:16:56.821202 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:16:56.821798 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:16:56.821809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:16:56.821947 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:16:56.822013 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:16:56.822071 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:16:56.822127 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 23:16:56.822188 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 23:16:56.822197 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 23:16:56.822216 kernel: PCI host bridge to bus 0000:00
Jul 15 23:16:56.822286 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 23:16:56.822350 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:16:56.822404 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 23:16:56.822457 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:16:56.822537 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:16:56.822607 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:16:56.822668 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 23:16:56.822727 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 23:16:56.822786 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 23:16:56.822845 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 23:16:56.822904 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 23:16:56.822965 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 23:16:56.823019 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 23:16:56.823071 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:16:56.823123 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 23:16:56.823132 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:16:56.823139 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:16:56.823146 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:16:56.823155 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:16:56.823162 kernel: iommu: Default domain type: Translated
Jul 15 23:16:56.823169 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:16:56.823176 kernel: efivars: Registered efivars operations
Jul 15 23:16:56.823182 kernel: vgaarb: loaded
Jul 15 23:16:56.823189 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:16:56.823196 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:16:56.823203 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:16:56.823224 kernel: pnp: PnP ACPI init
Jul 15 23:16:56.823292 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 23:16:56.823302 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:16:56.823309 kernel: NET: Registered PF_INET protocol family
Jul 15 23:16:56.823322 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:16:56.823330 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:16:56.823337 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:16:56.823344 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:16:56.823351 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:16:56.823360 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:16:56.823368 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:16:56.823375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:16:56.823382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:16:56.823388 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:16:56.823395 kernel: kvm [1]: HYP mode not available
Jul 15 23:16:56.823402 kernel: Initialise system trusted keyrings
Jul 15 23:16:56.823409 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:16:56.823416 kernel: Key type asymmetric registered
Jul 15 23:16:56.823424 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:16:56.823431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:16:56.823438 kernel: io scheduler mq-deadline registered
Jul 15 23:16:56.823445 kernel: io scheduler kyber registered
Jul 15 23:16:56.823452 kernel: io scheduler bfq registered
Jul 15 23:16:56.823459 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:16:56.823466 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:16:56.823473 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:16:56.823539 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 23:16:56.823549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:16:56.823556 kernel: thunder_xcv, ver 1.0
Jul 15 23:16:56.823563 kernel: thunder_bgx, ver 1.0
Jul 15 23:16:56.823570 kernel: nicpf, ver 1.0
Jul 15 23:16:56.823576 kernel: nicvf, ver 1.0
Jul 15 23:16:56.823649 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:16:56.823721 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:16:56 UTC (1752621416)
Jul 15 23:16:56.823730 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:16:56.823737 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:16:56.823748 kernel: watchdog: NMI not fully supported
Jul 15 23:16:56.823755 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:16:56.823762 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:16:56.823769 kernel: Segment Routing with IPv6
Jul 15 23:16:56.823776 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:16:56.823783 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:16:56.823790 kernel: Key type dns_resolver registered
Jul 15 23:16:56.823797 kernel: registered taskstats version 1
Jul 15 23:16:56.823804 kernel: Loading compiled-in X.509 certificates
Jul 15 23:16:56.823812 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:16:56.823819 kernel: Demotion targets for Node 0: null
Jul 15 23:16:56.823826 kernel: Key type .fscrypt registered
Jul 15 23:16:56.823833 kernel: Key type fscrypt-provisioning registered
Jul 15 23:16:56.823840 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:16:56.823847 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:16:56.823854 kernel: ima: No architecture policies found
Jul 15 23:16:56.823860 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:16:56.823869 kernel: clk: Disabling unused clocks
Jul 15 23:16:56.823876 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:16:56.823883 kernel: Warning: unable to open an initial console.
Jul 15 23:16:56.823890 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:16:56.823896 kernel: Run /init as init process
Jul 15 23:16:56.823903 kernel: with arguments:
Jul 15 23:16:56.823910 kernel: /init
Jul 15 23:16:56.823917 kernel: with environment:
Jul 15 23:16:56.823923 kernel: HOME=/
Jul 15 23:16:56.823931 kernel: TERM=linux
Jul 15 23:16:56.823939 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:16:56.823947 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:16:56.823957 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:16:56.823965 systemd[1]: Detected virtualization kvm.
Jul 15 23:16:56.823972 systemd[1]: Detected architecture arm64.
Jul 15 23:16:56.823978 systemd[1]: Running in initrd.
Jul 15 23:16:56.823986 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:16:56.823995 systemd[1]: Hostname set to .
Jul 15 23:16:56.824002 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:16:56.824009 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:16:56.824017 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:16:56.824024 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:16:56.824032 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:16:56.824039 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:16:56.824047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:16:56.824056 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:16:56.824064 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:16:56.824072 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:16:56.824080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:16:56.824087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:16:56.824094 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:16:56.824102 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:16:56.824111 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:16:56.824118 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:16:56.824126 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:16:56.824133 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:16:56.824141 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:16:56.824148 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:16:56.824156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:16:56.824163 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:16:56.824172 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:16:56.824180 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:16:56.824187 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:16:56.824195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:16:56.824202 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:16:56.824230 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:16:56.824238 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:16:56.824245 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:16:56.824252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:16:56.824262 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:16:56.824269 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:16:56.824277 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:16:56.824284 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:16:56.824293 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:16:56.824324 systemd-journald[245]: Collecting audit messages is disabled.
Jul 15 23:16:56.824344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:16:56.824352 systemd-journald[245]: Journal started
Jul 15 23:16:56.824372 systemd-journald[245]: Runtime Journal (/run/log/journal/7008282d9b104a51885d5b32fd056d49) is 6M, max 48.5M, 42.4M free.
Jul 15 23:16:56.815821 systemd-modules-load[247]: Inserted module 'overlay'
Jul 15 23:16:56.827048 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:16:56.829192 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:16:56.832339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:16:56.836473 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:16:56.837237 kernel: Bridge firewalling registered
Jul 15 23:16:56.837263 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 15 23:16:56.841356 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:16:56.844445 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:16:56.846981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:16:56.848472 systemd-tmpfiles[264]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:16:56.849453 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:16:56.854077 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:16:56.858400 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:16:56.859819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:16:56.862800 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:16:56.865084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:16:56.867470 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:16:56.894835 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:16:56.910814 systemd-resolved[289]: Positive Trust Anchors:
Jul 15 23:16:56.910830 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:16:56.910862 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:16:56.915527 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jul 15 23:16:56.916476 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:16:56.920176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:16:56.972225 kernel: SCSI subsystem initialized
Jul 15 23:16:56.976237 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:16:56.986245 kernel: iscsi: registered transport (tcp)
Jul 15 23:16:56.999274 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:16:56.999315 kernel: QLogic iSCSI HBA Driver
Jul 15 23:16:57.015401 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:16:57.031490 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:16:57.033627 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:16:57.076440 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:16:57.078734 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:16:57.141244 kernel: raid6: neonx8 gen() 15714 MB/s
Jul 15 23:16:57.158229 kernel: raid6: neonx4 gen() 15735 MB/s
Jul 15 23:16:57.175230 kernel: raid6: neonx2 gen() 13160 MB/s
Jul 15 23:16:57.192236 kernel: raid6: neonx1 gen() 10447 MB/s
Jul 15 23:16:57.209234 kernel: raid6: int64x8 gen() 6859 MB/s
Jul 15 23:16:57.226235 kernel: raid6: int64x4 gen() 7316 MB/s
Jul 15 23:16:57.243232 kernel: raid6: int64x2 gen() 6055 MB/s
Jul 15 23:16:57.260424 kernel: raid6: int64x1 gen() 5015 MB/s
Jul 15 23:16:57.260440 kernel: raid6: using algorithm neonx4 gen() 15735 MB/s
Jul 15 23:16:57.278342 kernel: raid6: .... xor() 12269 MB/s, rmw enabled
Jul 15 23:16:57.278371 kernel: raid6: using neon recovery algorithm
Jul 15 23:16:57.283227 kernel: xor: measuring software checksum speed
Jul 15 23:16:57.284521 kernel: 8regs : 17738 MB/sec
Jul 15 23:16:57.284535 kernel: 32regs : 21636 MB/sec
Jul 15 23:16:57.285766 kernel: arm64_neon : 27984 MB/sec
Jul 15 23:16:57.285779 kernel: xor: using function: arm64_neon (27984 MB/sec)
Jul 15 23:16:57.340231 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:16:57.345956 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:16:57.348517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:16:57.371986 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Jul 15 23:16:57.376150 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:16:57.378557 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:16:57.409707 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Jul 15 23:16:57.432249 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:16:57.434685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:16:57.485365 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:16:57.490432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:16:57.535690 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 15 23:16:57.536438 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:16:57.544980 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:16:57.545013 kernel: GPT:9289727 != 19775487
Jul 15 23:16:57.545189 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:16:57.545341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:16:57.548689 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:16:57.548688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:16:57.550843 kernel: GPT:9289727 != 19775487
Jul 15 23:16:57.550980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:16:57.554162 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:16:57.554179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:16:57.579473 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:16:57.581011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:16:57.589135 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:16:57.597948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 23:16:57.611035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:16:57.618913 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 23:16:57.620140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 23:16:57.622628 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:16:57.625591 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:16:57.627736 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 23:16:57.630673 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 23:16:57.632654 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 23:16:57.652332 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:16:57.697892 disk-uuid[590]: Primary Header is updated. Jul 15 23:16:57.697892 disk-uuid[590]: Secondary Entries is updated. Jul 15 23:16:57.697892 disk-uuid[590]: Secondary Header is updated. Jul 15 23:16:57.701226 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:16:58.711950 disk-uuid[598]: The operation has completed successfully. Jul 15 23:16:58.713300 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 23:16:58.735768 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 23:16:58.736949 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 23:16:58.768356 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 23:16:58.780246 sh[611]: Success Jul 15 23:16:58.795691 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 23:16:58.795741 kernel: device-mapper: uevent: version 1.0.3 Jul 15 23:16:58.796849 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 23:16:58.811994 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 23:16:58.835015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 23:16:58.837829 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 15 23:16:58.856688 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 23:16:58.861250 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 23:16:58.861280 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624) Jul 15 23:16:58.864088 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b Jul 15 23:16:58.864117 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:16:58.865662 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 23:16:58.868884 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 23:16:58.870103 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:16:58.871492 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 23:16:58.872187 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 23:16:58.873673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 23:16:58.899248 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 15 23:16:58.901967 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:16:58.902001 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:16:58.902012 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:16:58.909219 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:16:58.910052 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 23:16:58.912145 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 15 23:16:58.980752 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:16:58.985397 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 23:16:59.026988 systemd-networkd[797]: lo: Link UP Jul 15 23:16:59.027001 systemd-networkd[797]: lo: Gained carrier Jul 15 23:16:59.027803 systemd-networkd[797]: Enumeration completed Jul 15 23:16:59.027978 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:16:59.029632 systemd[1]: Reached target network.target - Network. Jul 15 23:16:59.030060 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 23:16:59.030065 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 23:16:59.031248 systemd-networkd[797]: eth0: Link UP Jul 15 23:16:59.031251 systemd-networkd[797]: eth0: Gained carrier Jul 15 23:16:59.031263 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 15 23:16:59.052263 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:16:59.053165 ignition[698]: Ignition 2.21.0 Jul 15 23:16:59.053171 ignition[698]: Stage: fetch-offline Jul 15 23:16:59.053202 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:16:59.053285 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:16:59.053497 ignition[698]: parsed url from cmdline: "" Jul 15 23:16:59.053500 ignition[698]: no config URL provided Jul 15 23:16:59.053505 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 23:16:59.053511 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jul 15 23:16:59.053529 ignition[698]: op(1): [started] loading QEMU firmware config module Jul 15 23:16:59.053534 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 23:16:59.062224 ignition[698]: op(1): [finished] loading QEMU firmware config module Jul 15 23:16:59.098371 ignition[698]: parsing config with SHA512: 44e66ef04f89d170684115c0657494bd5d8b67769d9355b765aeb8f0c737ae5ab124a7d0f16c15065ba3e76b3b1ef2544302c89d1c8a5a4fc8998c94184db3f6 Jul 15 23:16:59.104196 unknown[698]: fetched base config from "system" Jul 15 23:16:59.104241 unknown[698]: fetched user config from "qemu" Jul 15 23:16:59.104633 ignition[698]: fetch-offline: fetch-offline passed Jul 15 23:16:59.104686 ignition[698]: Ignition finished successfully Jul 15 23:16:59.106992 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:16:59.108576 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 23:16:59.109398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 15 23:16:59.132805 ignition[810]: Ignition 2.21.0 Jul 15 23:16:59.132819 ignition[810]: Stage: kargs Jul 15 23:16:59.132993 ignition[810]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:16:59.133002 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:16:59.134520 ignition[810]: kargs: kargs passed Jul 15 23:16:59.134575 ignition[810]: Ignition finished successfully Jul 15 23:16:59.136271 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 23:16:59.138796 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 15 23:16:59.174419 ignition[818]: Ignition 2.21.0 Jul 15 23:16:59.174435 ignition[818]: Stage: disks Jul 15 23:16:59.174573 ignition[818]: no configs at "/usr/lib/ignition/base.d" Jul 15 23:16:59.174583 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:16:59.177958 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 23:16:59.175791 ignition[818]: disks: disks passed Jul 15 23:16:59.179584 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 23:16:59.175839 ignition[818]: Ignition finished successfully Jul 15 23:16:59.181225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 23:16:59.182967 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 23:16:59.184823 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 23:16:59.186380 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:16:59.189258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 23:16:59.225311 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 23:16:59.229451 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 23:16:59.231927 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 15 23:16:59.296235 kernel: EXT4-fs (vda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none. Jul 15 23:16:59.296330 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 23:16:59.297604 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 23:16:59.300052 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 23:16:59.301758 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 23:16:59.302786 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 23:16:59.302828 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 23:16:59.302867 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:16:59.313985 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 23:16:59.316630 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 23:16:59.319731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836) Jul 15 23:16:59.322568 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:16:59.322610 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:16:59.323366 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:16:59.326345 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 15 23:16:59.361243 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 23:16:59.365594 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory Jul 15 23:16:59.369401 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 23:16:59.373254 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 23:16:59.449189 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 23:16:59.451325 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 15 23:16:59.452970 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 23:16:59.480244 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:16:59.495282 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 23:16:59.501983 ignition[951]: INFO : Ignition 2.21.0 Jul 15 23:16:59.501983 ignition[951]: INFO : Stage: mount Jul 15 23:16:59.503751 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:16:59.503751 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:16:59.503751 ignition[951]: INFO : mount: mount passed Jul 15 23:16:59.503751 ignition[951]: INFO : Ignition finished successfully Jul 15 23:16:59.506246 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 23:16:59.508805 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 23:16:59.861354 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 23:16:59.862817 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 15 23:16:59.894155 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (963) Jul 15 23:16:59.894192 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15 Jul 15 23:16:59.894217 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 23:16:59.895895 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 23:16:59.898393 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 23:16:59.926746 ignition[980]: INFO : Ignition 2.21.0 Jul 15 23:16:59.926746 ignition[980]: INFO : Stage: files Jul 15 23:16:59.928976 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:16:59.928976 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:16:59.931118 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Jul 15 23:16:59.932222 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 23:16:59.932222 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 23:16:59.935139 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 23:16:59.936440 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 23:16:59.936440 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 23:16:59.935701 unknown[980]: wrote ssh authorized keys file for user: core Jul 15 23:16:59.940051 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:16:59.940051 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 15 23:17:00.760459 systemd-networkd[797]: eth0: Gained IPv6LL Jul 15 23:17:00.804914 
ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 23:17:03.729441 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 15 23:17:03.729441 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:17:03.733332 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 15 23:17:03.905424 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 23:17:03.990976 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 23:17:03.990976 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 23:17:03.994553 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:17:03.994553 ignition[980]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:17:04.006803 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 15 23:17:04.432762 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 23:17:04.921117 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 15 23:17:04.921117 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 23:17:04.924781 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:17:04.960430 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 23:17:04.960430 ignition[980]: 
INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 23:17:04.960430 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 23:17:04.960430 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:17:04.968853 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 23:17:04.968853 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 23:17:04.968853 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 23:17:04.988140 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:17:04.993048 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 23:17:04.994670 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 23:17:04.994670 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 23:17:04.994670 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 23:17:04.994670 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:17:04.994670 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 23:17:04.994670 ignition[980]: INFO : files: files passed Jul 15 23:17:04.994670 ignition[980]: INFO : Ignition finished successfully Jul 15 23:17:04.999093 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 15 23:17:05.007608 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 23:17:05.030795 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 23:17:05.034407 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 23:17:05.036250 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 23:17:05.038509 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 23:17:05.039885 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:17:05.039885 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:17:05.042839 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 23:17:05.043033 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:17:05.045649 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 23:17:05.047383 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 23:17:05.102908 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 23:17:05.103989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 23:17:05.105585 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 15 23:17:05.107367 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 23:17:05.109098 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 23:17:05.109909 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 23:17:05.146111 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 15 23:17:05.148499 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 23:17:05.168705 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 23:17:05.169854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 23:17:05.171771 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 23:17:05.173423 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 23:17:05.173540 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 23:17:05.175915 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 23:17:05.177875 systemd[1]: Stopped target basic.target - Basic System. Jul 15 23:17:05.179404 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 23:17:05.180974 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 23:17:05.182790 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 23:17:05.184607 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 23:17:05.186374 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 23:17:05.188075 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 23:17:05.189890 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 23:17:05.191717 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 23:17:05.193378 systemd[1]: Stopped target swap.target - Swaps. Jul 15 23:17:05.194825 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 23:17:05.194951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 23:17:05.197131 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 23:17:05.199014 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 15 23:17:05.200856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 23:17:05.200961 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 23:17:05.202837 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 23:17:05.202953 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 23:17:05.205597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 23:17:05.205709 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 23:17:05.207489 systemd[1]: Stopped target paths.target - Path Units. Jul 15 23:17:05.209019 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 23:17:05.212278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 23:17:05.214069 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 23:17:05.216121 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 23:17:05.217679 systemd[1]: iscsid.socket: Deactivated successfully. Jul 15 23:17:05.217763 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 23:17:05.219246 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 23:17:05.219333 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 23:17:05.220829 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 23:17:05.220949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 23:17:05.222758 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 23:17:05.222856 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 23:17:05.225080 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 23:17:05.227480 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jul 15 23:17:05.228703 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 23:17:05.228821 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 23:17:05.230574 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 23:17:05.230668 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 23:17:05.236915 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 23:17:05.240904 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 23:17:05.252043 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 23:17:05.254166 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 23:17:05.256098 ignition[1036]: INFO : Ignition 2.21.0 Jul 15 23:17:05.256098 ignition[1036]: INFO : Stage: umount Jul 15 23:17:05.256098 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 23:17:05.256098 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 23:17:05.254272 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 23:17:05.261261 ignition[1036]: INFO : umount: umount passed Jul 15 23:17:05.261261 ignition[1036]: INFO : Ignition finished successfully Jul 15 23:17:05.260475 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 23:17:05.260574 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 23:17:05.265695 systemd[1]: Stopped target network.target - Network. Jul 15 23:17:05.266622 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 23:17:05.266687 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 23:17:05.268226 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 23:17:05.268278 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 23:17:05.270449 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 15 23:17:05.270505 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 23:17:05.272122 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 23:17:05.272163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 23:17:05.273881 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 23:17:05.273933 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 23:17:05.275952 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 23:17:05.277440 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 23:17:05.292403 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 23:17:05.292539 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 15 23:17:05.304829 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 23:17:05.305057 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 23:17:05.305142 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 23:17:05.309052 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 23:17:05.309582 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 23:17:05.311469 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 23:17:05.311526 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 23:17:05.315366 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 23:17:05.316229 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 23:17:05.316298 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 23:17:05.319446 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:17:05.319495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 23:17:05.322330 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:17:05.322370 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:17:05.324185 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:17:05.324238 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:17:05.327022 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:17:05.329764 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:17:05.329815 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:17:05.337795 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:17:05.337901 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:17:05.339866 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:17:05.339976 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:17:05.341966 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:17:05.342032 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:17:05.343265 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:17:05.343309 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:17:05.345194 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:17:05.345259 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:17:05.347887 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:17:05.347933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:17:05.350539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:17:05.350590 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:17:05.353966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:17:05.355068 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:17:05.355122 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:17:05.358041 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:17:05.358090 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:17:05.361361 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 15 23:17:05.361406 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:17:05.364923 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:17:05.364962 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:17:05.367083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:17:05.367127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:17:05.371233 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 15 23:17:05.371283 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 15 23:17:05.371326 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 15 23:17:05.371360 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:17:05.375278 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:17:05.377238 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:17:05.379703 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:17:05.382419 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:17:05.408523 systemd[1]: Switching root.
Jul 15 23:17:05.443253 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:17:05.443322 systemd-journald[245]: Journal stopped
Jul 15 23:17:06.184439 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:17:06.184494 kernel: SELinux: policy capability open_perms=1
Jul 15 23:17:06.184504 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:17:06.184516 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:17:06.184532 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:17:06.184543 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:17:06.184557 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:17:06.184566 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:17:06.184576 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:17:06.184585 kernel: audit: type=1403 audit(1752621425.588:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:17:06.184600 systemd[1]: Successfully loaded SELinux policy in 42.665ms.
Jul 15 23:17:06.184618 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.132ms.
Jul 15 23:17:06.184630 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:17:06.184641 systemd[1]: Detected virtualization kvm.
Jul 15 23:17:06.184651 systemd[1]: Detected architecture arm64.
Jul 15 23:17:06.184663 systemd[1]: Detected first boot.
Jul 15 23:17:06.184674 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:17:06.184685 zram_generator::config[1111]: No configuration found.
Jul 15 23:17:06.184695 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:17:06.184704 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:17:06.184714 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:17:06.184724 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:17:06.184734 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:17:06.184746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:17:06.184756 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:17:06.184767 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:17:06.184782 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:17:06.184792 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:17:06.184802 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:17:06.184812 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:17:06.184823 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:17:06.184834 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:17:06.184846 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:17:06.184856 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:17:06.184866 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:17:06.184876 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:17:06.184886 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:17:06.184896 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:17:06.184906 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 15 23:17:06.184916 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:17:06.184927 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:17:06.184937 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:17:06.184947 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:17:06.184957 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:17:06.184967 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:17:06.184977 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:17:06.184988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:17:06.185000 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:17:06.185013 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:17:06.185024 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:17:06.185035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:17:06.185045 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:17:06.185056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:17:06.185066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:17:06.185076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:17:06.185086 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:17:06.185096 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:17:06.185107 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:17:06.185117 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:17:06.185127 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:17:06.185136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:17:06.185146 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:17:06.185157 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:17:06.185167 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:17:06.185177 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:17:06.185192 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:17:06.185203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:17:06.185230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:17:06.185240 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:17:06.185251 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:17:06.185261 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:17:06.185271 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:17:06.185281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:17:06.185296 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:17:06.185310 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:17:06.185321 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:17:06.185331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:17:06.185341 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:17:06.185351 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:17:06.185361 kernel: fuse: init (API version 7.41)
Jul 15 23:17:06.185370 kernel: loop: module loaded
Jul 15 23:17:06.185379 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:17:06.185390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:17:06.185401 kernel: ACPI: bus type drm_connector registered
Jul 15 23:17:06.185411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:17:06.185421 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:17:06.185431 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:17:06.185441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:17:06.185454 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:17:06.185464 systemd[1]: Stopped verity-setup.service.
Jul 15 23:17:06.185474 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:17:06.185484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:17:06.185494 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:17:06.185529 systemd-journald[1176]: Collecting audit messages is disabled.
Jul 15 23:17:06.185553 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:17:06.185564 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:17:06.185574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:17:06.185585 systemd-journald[1176]: Journal started
Jul 15 23:17:06.185606 systemd-journald[1176]: Runtime Journal (/run/log/journal/7008282d9b104a51885d5b32fd056d49) is 6M, max 48.5M, 42.4M free.
Jul 15 23:17:05.959438 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:17:05.972014 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 23:17:05.972387 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:17:06.188880 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:17:06.189751 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:17:06.191397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:17:06.192980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:17:06.193150 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:17:06.194604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:17:06.194760 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:17:06.196124 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:17:06.197308 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:17:06.198632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:17:06.198802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:17:06.200324 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:17:06.200485 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:17:06.201837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:17:06.202002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:17:06.203542 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:17:06.204978 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:17:06.206674 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:17:06.209163 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:17:06.222675 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:17:06.225474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:17:06.227601 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:17:06.228745 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:17:06.228783 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:17:06.230699 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:17:06.237071 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:17:06.238621 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:17:06.239898 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:17:06.241934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:17:06.243104 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:17:06.244312 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:17:06.246462 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:17:06.247507 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:17:06.252354 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:17:06.255651 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:17:06.258515 systemd-journald[1176]: Time spent on flushing to /var/log/journal/7008282d9b104a51885d5b32fd056d49 is 25.880ms for 892 entries.
Jul 15 23:17:06.258515 systemd-journald[1176]: System Journal (/var/log/journal/7008282d9b104a51885d5b32fd056d49) is 8M, max 195.6M, 187.6M free.
Jul 15 23:17:06.295455 systemd-journald[1176]: Received client request to flush runtime journal.
Jul 15 23:17:06.295506 kernel: loop0: detected capacity change from 0 to 138376
Jul 15 23:17:06.266481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:17:06.268248 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:17:06.272997 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:17:06.274842 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:17:06.280861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:17:06.283610 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:17:06.286414 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:17:06.297595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:17:06.300821 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 15 23:17:06.300839 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Jul 15 23:17:06.308111 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:17:06.306036 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:17:06.310355 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:17:06.325590 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:17:06.337307 kernel: loop1: detected capacity change from 0 to 107312
Jul 15 23:17:06.348186 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:17:06.351800 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:17:06.368235 kernel: loop2: detected capacity change from 0 to 211168
Jul 15 23:17:06.383034 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jul 15 23:17:06.383047 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jul 15 23:17:06.387354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:17:06.402453 kernel: loop3: detected capacity change from 0 to 138376
Jul 15 23:17:06.410766 kernel: loop4: detected capacity change from 0 to 107312
Jul 15 23:17:06.419233 kernel: loop5: detected capacity change from 0 to 211168
Jul 15 23:17:06.423753 (sd-merge)[1254]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 23:17:06.424126 (sd-merge)[1254]: Merged extensions into '/usr'.
Jul 15 23:17:06.427960 systemd[1]: Reload requested from client PID 1227 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:17:06.427975 systemd[1]: Reloading...
Jul 15 23:17:06.467229 zram_generator::config[1280]: No configuration found.
Jul 15 23:17:06.563800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:17:06.580228 ldconfig[1222]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:17:06.628077 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:17:06.628164 systemd[1]: Reloading finished in 199 ms.
Jul 15 23:17:06.657737 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:17:06.659225 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:17:06.677417 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:17:06.679128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:17:06.693545 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:17:06.693576 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:17:06.693669 systemd[1]: Reload requested from client PID 1315 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:17:06.693679 systemd[1]: Reloading...
Jul 15 23:17:06.693770 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:17:06.693950 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:17:06.694560 systemd-tmpfiles[1316]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:17:06.694763 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Jul 15 23:17:06.694812 systemd-tmpfiles[1316]: ACLs are not supported, ignoring.
Jul 15 23:17:06.697647 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:17:06.697658 systemd-tmpfiles[1316]: Skipping /boot
Jul 15 23:17:06.705918 systemd-tmpfiles[1316]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:17:06.705935 systemd-tmpfiles[1316]: Skipping /boot
Jul 15 23:17:06.743233 zram_generator::config[1343]: No configuration found.
Jul 15 23:17:06.805615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:17:06.868450 systemd[1]: Reloading finished in 174 ms.
Jul 15 23:17:06.888626 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:17:06.890189 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:17:06.909294 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:17:06.911570 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:17:06.915353 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:17:06.918340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:17:06.921034 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:17:06.930346 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:17:06.937463 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:17:06.939943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:17:06.940997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:17:06.944002 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:17:06.946189 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:17:06.947456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:17:06.947569 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:17:06.948330 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:17:06.950238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:17:06.951871 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:17:06.952000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:17:06.957199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:17:06.963349 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:17:06.965131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:17:06.967344 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:17:06.969390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:17:06.971580 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Jul 15 23:17:06.980860 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:17:06.982438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:17:06.984371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:17:06.986302 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:17:06.988444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:17:06.992140 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:17:06.993412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:17:06.993450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:17:07.009176 augenrules[1422]: No rules
Jul 15 23:17:07.011121 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 23:17:07.014015 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:17:07.015415 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:17:07.017651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:17:07.019532 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:17:07.021224 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:17:07.022529 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:17:07.024012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:17:07.025254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:17:07.026980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:17:07.031488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:17:07.032890 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:17:07.033032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:17:07.034489 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:17:07.034642 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:17:07.043265 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:17:07.051165 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:17:07.054334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:17:07.054386 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:17:07.054406 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:17:07.092834 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 15 23:17:07.147638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:17:07.150709 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:17:07.178499 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 23:17:07.180313 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:17:07.193818 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:17:07.213674 systemd-resolved[1383]: Positive Trust Anchors:
Jul 15 23:17:07.213691 systemd-resolved[1383]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:17:07.213723 systemd-resolved[1383]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:17:07.221443 systemd-resolved[1383]: Defaulting to hostname 'linux'.
Jul 15 23:17:07.223012 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:17:07.224332 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:17:07.225486 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:17:07.225843 systemd-networkd[1464]: lo: Link UP
Jul 15 23:17:07.226078 systemd-networkd[1464]: lo: Gained carrier
Jul 15 23:17:07.226961 systemd-networkd[1464]: Enumeration completed
Jul 15 23:17:07.227072 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:17:07.227584 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:17:07.227708 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:17:07.228255 systemd-networkd[1464]: eth0: Link UP
Jul 15 23:17:07.228454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:17:07.228543 systemd-networkd[1464]: eth0: Gained carrier
Jul 15 23:17:07.228616 systemd-networkd[1464]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:17:07.231447 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:17:07.232833 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:17:07.234078 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:17:07.235307 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:17:07.235339 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:17:07.236203 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:17:07.238391 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:17:07.240697 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:17:07.243863 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 15 23:17:07.245320 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 15 23:17:07.246557 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 15 23:17:07.255029 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 15 23:17:07.256673 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 15 23:17:07.258462 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:17:07.259800 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 15 23:17:07.262969 systemd[1]: Reached target network.target - Network.
Jul 15 23:17:07.264037 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:17:07.265167 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:17:07.266281 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:17:07.266382 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 15 23:17:07.267477 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 15 23:17:07.270397 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 15 23:17:07.271296 systemd-networkd[1464]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:17:07.271924 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
Jul 15 23:17:07.272760 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 15 23:17:07.272817 systemd-timesyncd[1432]: Initial clock synchronization to Tue 2025-07-15 23:17:07.277916 UTC.
Jul 15 23:17:07.274450 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 15 23:17:07.285241 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 15 23:17:07.288836 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 15 23:17:07.289933 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 15 23:17:07.291183 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 15 23:17:07.294389 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 15 23:17:07.296479 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 15 23:17:07.298982 jq[1497]: false
Jul 15 23:17:07.300611 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 15 23:17:07.307839 extend-filesystems[1498]: Found /dev/vda6
Jul 15 23:17:07.308746 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 15 23:17:07.311572 extend-filesystems[1498]: Found /dev/vda9
Jul 15 23:17:07.311749 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 15 23:17:07.315026 extend-filesystems[1498]: Checking size of /dev/vda9
Jul 15 23:17:07.315294 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 15 23:17:07.322140 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 15 23:17:07.326763 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 15 23:17:07.327510 systemd[1]: Starting update-engine.service - Update Engine...
Jul 15 23:17:07.329446 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 15 23:17:07.336880 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 15 23:17:07.338773 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 15 23:17:07.339960 jq[1522]: true
Jul 15 23:17:07.340580 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 15 23:17:07.341004 systemd[1]: motdgen.service: Deactivated successfully.
Jul 15 23:17:07.341180 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 15 23:17:07.342737 extend-filesystems[1498]: Resized partition /dev/vda9
Jul 15 23:17:07.344172 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 15 23:17:07.344948 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 15 23:17:07.363160 extend-filesystems[1527]: resize2fs 1.47.2 (1-Jan-2025)
Jul 15 23:17:07.367538 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 15 23:17:07.374232 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 15 23:17:07.378535 jq[1528]: true
Jul 15 23:17:07.377843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:17:07.399421 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 15 23:17:07.408339 update_engine[1521]: I20250715 23:17:07.407425 1521 main.cc:92] Flatcar Update Engine starting
Jul 15 23:17:07.417456 tar[1526]: linux-arm64/LICENSE
Jul 15 23:17:07.417737 tar[1526]: linux-arm64/helm
Jul 15 23:17:07.425438 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 15 23:17:07.437937 dbus-daemon[1495]: [system] SELinux support is enabled
Jul 15 23:17:07.438588 extend-filesystems[1527]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 15 23:17:07.438588 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 15 23:17:07.438588 extend-filesystems[1527]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 15 23:17:07.455798 extend-filesystems[1498]: Resized filesystem in /dev/vda9
Jul 15 23:17:07.456679 update_engine[1521]: I20250715 23:17:07.450474 1521 update_check_scheduler.cc:74] Next update check in 5m13s
Jul 15 23:17:07.438640 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 15 23:17:07.447604 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 15 23:17:07.451481 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 15 23:17:07.472493 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 15 23:17:07.474779 systemd-logind[1510]: New seat seat0.
Jul 15 23:17:07.479199 bash[1562]: Updated "/home/core/.ssh/authorized_keys"
Jul 15 23:17:07.493449 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 15 23:17:07.494839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:17:07.496595 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 15 23:17:07.501459 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 15 23:17:07.501789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 15 23:17:07.502092 dbus-daemon[1495]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 15 23:17:07.501845 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 15 23:17:07.503265 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 15 23:17:07.503300 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 15 23:17:07.504662 systemd[1]: Started update-engine.service - Update Engine.
Jul 15 23:17:07.510499 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 15 23:17:07.577101 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 15 23:17:07.642446 containerd[1530]: time="2025-07-15T23:17:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 15 23:17:07.644956 containerd[1530]: time="2025-07-15T23:17:07.644910680Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 15 23:17:07.655245 containerd[1530]: time="2025-07-15T23:17:07.655115880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.8µs"
Jul 15 23:17:07.655245 containerd[1530]: time="2025-07-15T23:17:07.655156400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 15 23:17:07.655245 containerd[1530]: time="2025-07-15T23:17:07.655175280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 15 23:17:07.655417 containerd[1530]: time="2025-07-15T23:17:07.655375680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 15 23:17:07.655417 containerd[1530]: time="2025-07-15T23:17:07.655394240Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 15 23:17:07.655665 containerd[1530]: time="2025-07-15T23:17:07.655419160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655665 containerd[1530]: time="2025-07-15T23:17:07.655477000Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655665 containerd[1530]: time="2025-07-15T23:17:07.655488160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655752 containerd[1530]: time="2025-07-15T23:17:07.655723040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655752 containerd[1530]: time="2025-07-15T23:17:07.655745360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655795 containerd[1530]: time="2025-07-15T23:17:07.655758320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655795 containerd[1530]: time="2025-07-15T23:17:07.655766360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 15 23:17:07.655873 containerd[1530]: time="2025-07-15T23:17:07.655830080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 15 23:17:07.656038 containerd[1530]: time="2025-07-15T23:17:07.656020520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:17:07.656070 containerd[1530]: time="2025-07-15T23:17:07.656053960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 15 23:17:07.656070 containerd[1530]: time="2025-07-15T23:17:07.656063480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 15 23:17:07.656625 containerd[1530]: time="2025-07-15T23:17:07.656597360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 15 23:17:07.656852 containerd[1530]: time="2025-07-15T23:17:07.656832200Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 15 23:17:07.656927 containerd[1530]: time="2025-07-15T23:17:07.656909640Z" level=info msg="metadata content store policy set" policy=shared
Jul 15 23:17:07.662066 containerd[1530]: time="2025-07-15T23:17:07.662029040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662086440Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662105160Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662117600Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662132680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662147680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662159760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662177200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662189360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662199560Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 15 23:17:07.662218 containerd[1530]: time="2025-07-15T23:17:07.662220280Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662236080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662372080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662393080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662406640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662417520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662427680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662437360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662448560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662458360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 15 23:17:07.662651 containerd[1530]: time="2025-07-15T23:17:07.662469560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 15 23:17:07.662967 containerd[1530]: time="2025-07-15T23:17:07.662881360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 15 23:17:07.662967 containerd[1530]: time="2025-07-15T23:17:07.662937960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 15 23:17:07.663157 containerd[1530]: time="2025-07-15T23:17:07.663138760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 15 23:17:07.663389 containerd[1530]: time="2025-07-15T23:17:07.663162240Z" level=info msg="Start snapshots syncer"
Jul 15 23:17:07.663389 containerd[1530]: time="2025-07-15T23:17:07.663189480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 15 23:17:07.664154 containerd[1530]: time="2025-07-15T23:17:07.663920480Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 15 23:17:07.664420 containerd[1530]: time="2025-07-15T23:17:07.664317480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 15 23:17:07.664458 containerd[1530]: time="2025-07-15T23:17:07.664424680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 15 23:17:07.664582 containerd[1530]: time="2025-07-15T23:17:07.664556640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 15 23:17:07.664612 containerd[1530]: time="2025-07-15T23:17:07.664592240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 15 23:17:07.664612 containerd[1530]: time="2025-07-15T23:17:07.664608680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 15 23:17:07.664654 containerd[1530]: time="2025-07-15T23:17:07.664620160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 15 23:17:07.664654 containerd[1530]: time="2025-07-15T23:17:07.664636400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 15 23:17:07.664692 containerd[1530]: time="2025-07-15T23:17:07.664652440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 15 23:17:07.664692 containerd[1530]: time="2025-07-15T23:17:07.664667680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664757920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664781000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664797920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664837840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664857840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664872240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664886400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664895280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664908760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 15 23:17:07.664956 containerd[1530]: time="2025-07-15T23:17:07.664923440Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 15 23:17:07.665147 containerd[1530]: time="2025-07-15T23:17:07.665052880Z" level=info msg="runtime interface created"
Jul 15 23:17:07.665147 containerd[1530]: time="2025-07-15T23:17:07.665061840Z" level=info msg="created NRI interface"
Jul 15 23:17:07.665147 containerd[1530]: time="2025-07-15T23:17:07.665070360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 15 23:17:07.665147 containerd[1530]: time="2025-07-15T23:17:07.665087280Z" level=info msg="Connect containerd service"
Jul 15 23:17:07.665147 containerd[1530]: time="2025-07-15T23:17:07.665119080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 15 23:17:07.666695 containerd[1530]: time="2025-07-15T23:17:07.666666720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:17:07.756273 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 15 23:17:07.776661 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 15 23:17:07.780384 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 15 23:17:07.785470 containerd[1530]: time="2025-07-15T23:17:07.785403400Z" level=info msg="Start subscribing containerd event"
Jul 15 23:17:07.785540 containerd[1530]: time="2025-07-15T23:17:07.785488920Z" level=info msg="Start recovering state"
Jul 15 23:17:07.785635 containerd[1530]: time="2025-07-15T23:17:07.785616640Z" level=info msg="Start event monitor"
Jul 15 23:17:07.785685 containerd[1530]: time="2025-07-15T23:17:07.785643560Z" level=info msg="Start cni network conf syncer for default"
Jul 15 23:17:07.785705 containerd[1530]: time="2025-07-15T23:17:07.785667960Z" level=info msg="Start streaming server"
Jul 15 23:17:07.785723 containerd[1530]: time="2025-07-15T23:17:07.785708560Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 15 23:17:07.785723 containerd[1530]: time="2025-07-15T23:17:07.785717040Z" level=info msg="runtime interface starting up..."
Jul 15 23:17:07.785755 containerd[1530]: time="2025-07-15T23:17:07.785723680Z" level=info msg="starting plugins..."
Jul 15 23:17:07.785755 containerd[1530]: time="2025-07-15T23:17:07.785739440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 15 23:17:07.785925 containerd[1530]: time="2025-07-15T23:17:07.785900720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 15 23:17:07.785970 containerd[1530]: time="2025-07-15T23:17:07.785955320Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 15 23:17:07.786092 systemd[1]: Started containerd.service - containerd container runtime.
Jul 15 23:17:07.787843 containerd[1530]: time="2025-07-15T23:17:07.787813800Z" level=info msg="containerd successfully booted in 0.145869s"
Jul 15 23:17:07.798132 systemd[1]: issuegen.service: Deactivated successfully.
Jul 15 23:17:07.798418 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 15 23:17:07.803358 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 15 23:17:07.824531 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 15 23:17:07.829646 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 15 23:17:07.832258 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 15 23:17:07.833812 systemd[1]: Reached target getty.target - Login Prompts.
Jul 15 23:17:07.846064 tar[1526]: linux-arm64/README.md
Jul 15 23:17:07.879341 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 15 23:17:08.824384 systemd-networkd[1464]: eth0: Gained IPv6LL
Jul 15 23:17:08.829726 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 15 23:17:08.831734 systemd[1]: Reached target network-online.target - Network is Online.
Jul 15 23:17:08.834617 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 15 23:17:08.837295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:17:08.848849 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 15 23:17:08.873893 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 15 23:17:08.874129 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 15 23:17:08.875882 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 15 23:17:08.878567 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 15 23:17:09.409638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:17:09.412432 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 15 23:17:09.414082 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:17:09.422323 systemd[1]: Startup finished in 2.115s (kernel) + 8.972s (initrd) + 3.878s (userspace) = 14.966s.
Jul 15 23:17:09.890550 kubelet[1637]: E0715 23:17:09.890440 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:17:09.892718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:17:09.892856 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:17:09.893145 systemd[1]: kubelet.service: Consumed 805ms CPU time, 256.4M memory peak.
Jul 15 23:17:10.050270 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 15 23:17:10.052343 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:55520.service - OpenSSH per-connection server daemon (10.0.0.1:55520).
Jul 15 23:17:10.139780 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 55520 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:17:10.138712 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:10.145387 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 15 23:17:10.149952 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 15 23:17:10.157415 systemd-logind[1510]: New session 1 of user core.
Jul 15 23:17:10.174425 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 15 23:17:10.177076 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 15 23:17:10.191704 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 15 23:17:10.194043 systemd-logind[1510]: New session c1 of user core.
Jul 15 23:17:10.320396 systemd[1654]: Queued start job for default target default.target.
Jul 15 23:17:10.343319 systemd[1654]: Created slice app.slice - User Application Slice.
Jul 15 23:17:10.343352 systemd[1654]: Reached target paths.target - Paths.
Jul 15 23:17:10.343390 systemd[1654]: Reached target timers.target - Timers.
Jul 15 23:17:10.344662 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 15 23:17:10.354857 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 15 23:17:10.355056 systemd[1654]: Reached target sockets.target - Sockets.
Jul 15 23:17:10.355109 systemd[1654]: Reached target basic.target - Basic System.
Jul 15 23:17:10.355138 systemd[1654]: Reached target default.target - Main User Target.
Jul 15 23:17:10.355168 systemd[1654]: Startup finished in 154ms.
Jul 15 23:17:10.355230 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 15 23:17:10.356531 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 15 23:17:10.414100 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:55522.service - OpenSSH per-connection server daemon (10.0.0.1:55522).
Jul 15 23:17:10.488402 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 55522 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:17:10.489740 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:17:10.494949 systemd-logind[1510]: New session 2 of user core.
Jul 15 23:17:10.504396 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:17:10.560383 sshd[1667]: Connection closed by 10.0.0.1 port 55522 Jul 15 23:17:10.560633 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:10.571368 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:55522.service: Deactivated successfully. Jul 15 23:17:10.572936 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:17:10.573595 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:17:10.575772 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:55530.service - OpenSSH per-connection server daemon (10.0.0.1:55530). Jul 15 23:17:10.576703 systemd-logind[1510]: Removed session 2. Jul 15 23:17:10.633863 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 55530 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:17:10.635158 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:10.639280 systemd-logind[1510]: New session 3 of user core. Jul 15 23:17:10.647387 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:17:10.696037 sshd[1675]: Connection closed by 10.0.0.1 port 55530 Jul 15 23:17:10.695824 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:10.709311 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:55530.service: Deactivated successfully. Jul 15 23:17:10.710767 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 23:17:10.712254 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit. Jul 15 23:17:10.713557 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:55536.service - OpenSSH per-connection server daemon (10.0.0.1:55536). Jul 15 23:17:10.714426 systemd-logind[1510]: Removed session 3. 
Jul 15 23:17:10.773906 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 55536 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:17:10.775236 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:10.779096 systemd-logind[1510]: New session 4 of user core. Jul 15 23:17:10.789423 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 23:17:10.841268 sshd[1683]: Connection closed by 10.0.0.1 port 55536 Jul 15 23:17:10.841641 sshd-session[1681]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:10.859500 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:55536.service: Deactivated successfully. Jul 15 23:17:10.861680 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 23:17:10.862588 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Jul 15 23:17:10.865549 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:55542.service - OpenSSH per-connection server daemon (10.0.0.1:55542). Jul 15 23:17:10.866603 systemd-logind[1510]: Removed session 4. Jul 15 23:17:10.921655 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 55542 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:17:10.922882 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:10.927287 systemd-logind[1510]: New session 5 of user core. Jul 15 23:17:10.938400 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 15 23:17:11.000360 sudo[1692]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 23:17:11.002482 sudo[1692]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:17:11.018042 sudo[1692]: pam_unix(sudo:session): session closed for user root Jul 15 23:17:11.020716 sshd[1691]: Connection closed by 10.0.0.1 port 55542 Jul 15 23:17:11.021385 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:11.032048 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:55542.service: Deactivated successfully. Jul 15 23:17:11.033537 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 23:17:11.034569 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Jul 15 23:17:11.036969 systemd-logind[1510]: Removed session 5. Jul 15 23:17:11.039231 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546). Jul 15 23:17:11.100241 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:17:11.101915 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:11.106293 systemd-logind[1510]: New session 6 of user core. Jul 15 23:17:11.115394 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 15 23:17:11.166920 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 23:17:11.167230 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:17:11.172792 sudo[1702]: pam_unix(sudo:session): session closed for user root Jul 15 23:17:11.177795 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 23:17:11.178064 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:17:11.188105 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 23:17:11.239730 augenrules[1724]: No rules Jul 15 23:17:11.241050 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 23:17:11.241352 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 23:17:11.243423 sudo[1701]: pam_unix(sudo:session): session closed for user root Jul 15 23:17:11.245270 sshd[1700]: Connection closed by 10.0.0.1 port 55546 Jul 15 23:17:11.245347 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:11.258448 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:55546.service: Deactivated successfully. Jul 15 23:17:11.260679 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 23:17:11.263517 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Jul 15 23:17:11.265729 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:55556.service - OpenSSH per-connection server daemon (10.0.0.1:55556). Jul 15 23:17:11.266684 systemd-logind[1510]: Removed session 6. Jul 15 23:17:11.325001 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 55556 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:17:11.326439 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:17:11.331187 systemd-logind[1510]: New session 7 of user core. 
Jul 15 23:17:11.350421 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 23:17:11.401596 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 23:17:11.401872 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 23:17:11.811225 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 23:17:11.835655 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 23:17:12.112391 dockerd[1757]: time="2025-07-15T23:17:12.112254307Z" level=info msg="Starting up" Jul 15 23:17:12.113592 dockerd[1757]: time="2025-07-15T23:17:12.113549293Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 23:17:12.438102 dockerd[1757]: time="2025-07-15T23:17:12.437983844Z" level=info msg="Loading containers: start." Jul 15 23:17:12.450250 kernel: Initializing XFRM netlink socket Jul 15 23:17:12.661217 systemd-networkd[1464]: docker0: Link UP Jul 15 23:17:12.665605 dockerd[1757]: time="2025-07-15T23:17:12.665560850Z" level=info msg="Loading containers: done." 
Jul 15 23:17:12.681422 dockerd[1757]: time="2025-07-15T23:17:12.681359644Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 23:17:12.681587 dockerd[1757]: time="2025-07-15T23:17:12.681455258Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 15 23:17:12.681587 dockerd[1757]: time="2025-07-15T23:17:12.681569914Z" level=info msg="Initializing buildkit" Jul 15 23:17:12.707842 dockerd[1757]: time="2025-07-15T23:17:12.707730321Z" level=info msg="Completed buildkit initialization" Jul 15 23:17:12.715083 dockerd[1757]: time="2025-07-15T23:17:12.715023851Z" level=info msg="Daemon has completed initialization" Jul 15 23:17:12.715263 dockerd[1757]: time="2025-07-15T23:17:12.715230321Z" level=info msg="API listen on /run/docker.sock" Jul 15 23:17:12.715344 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 23:17:13.330971 containerd[1530]: time="2025-07-15T23:17:13.330925603Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Jul 15 23:17:14.036351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185982764.mount: Deactivated successfully. 
Jul 15 23:17:14.814518 containerd[1530]: time="2025-07-15T23:17:14.814379047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:14.815388 containerd[1530]: time="2025-07-15T23:17:14.815136030Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352096" Jul 15 23:17:14.816126 containerd[1530]: time="2025-07-15T23:17:14.816088998Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:14.820339 containerd[1530]: time="2025-07-15T23:17:14.820290166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:14.821497 containerd[1530]: time="2025-07-15T23:17:14.821445922Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 1.490476433s" Jul 15 23:17:14.821497 containerd[1530]: time="2025-07-15T23:17:14.821486288Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Jul 15 23:17:14.824881 containerd[1530]: time="2025-07-15T23:17:14.824842581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Jul 15 23:17:15.761632 containerd[1530]: time="2025-07-15T23:17:15.761578898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:15.762371 containerd[1530]: time="2025-07-15T23:17:15.762342878Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537848" Jul 15 23:17:15.763028 containerd[1530]: time="2025-07-15T23:17:15.762994803Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:15.765663 containerd[1530]: time="2025-07-15T23:17:15.765598704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:15.766546 containerd[1530]: time="2025-07-15T23:17:15.766505623Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 941.619395ms" Jul 15 23:17:15.766546 containerd[1530]: time="2025-07-15T23:17:15.766544628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Jul 15 23:17:15.767569 containerd[1530]: time="2025-07-15T23:17:15.767369776Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Jul 15 23:17:16.805244 containerd[1530]: time="2025-07-15T23:17:16.805164888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:16.805804 containerd[1530]: time="2025-07-15T23:17:16.805774286Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293526" Jul 15 23:17:16.806872 containerd[1530]: time="2025-07-15T23:17:16.806835860Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:16.809302 containerd[1530]: time="2025-07-15T23:17:16.809272009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:16.810366 containerd[1530]: time="2025-07-15T23:17:16.810267295Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.042860435s" Jul 15 23:17:16.810366 containerd[1530]: time="2025-07-15T23:17:16.810306660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Jul 15 23:17:16.811129 containerd[1530]: time="2025-07-15T23:17:16.811061436Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Jul 15 23:17:17.916325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809316593.mount: Deactivated successfully. 
Jul 15 23:17:18.351836 containerd[1530]: time="2025-07-15T23:17:18.351777962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:18.354329 containerd[1530]: time="2025-07-15T23:17:18.354284541Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199474" Jul 15 23:17:18.359491 containerd[1530]: time="2025-07-15T23:17:18.359432233Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:18.361761 containerd[1530]: time="2025-07-15T23:17:18.361703184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:18.362669 containerd[1530]: time="2025-07-15T23:17:18.362628294Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 1.551530053s" Jul 15 23:17:18.362719 containerd[1530]: time="2025-07-15T23:17:18.362668298Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Jul 15 23:17:18.363235 containerd[1530]: time="2025-07-15T23:17:18.363172318Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 15 23:17:18.931676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718693546.mount: Deactivated successfully. 
Jul 15 23:17:19.682987 containerd[1530]: time="2025-07-15T23:17:19.682913761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:19.696103 containerd[1530]: time="2025-07-15T23:17:19.696048196Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 15 23:17:19.702379 containerd[1530]: time="2025-07-15T23:17:19.702345242Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:19.705727 containerd[1530]: time="2025-07-15T23:17:19.705666865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:19.707668 containerd[1530]: time="2025-07-15T23:17:19.707624770Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.344418568s" Jul 15 23:17:19.707712 containerd[1530]: time="2025-07-15T23:17:19.707670056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 15 23:17:19.708293 containerd[1530]: time="2025-07-15T23:17:19.708267284Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 23:17:19.951356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 23:17:19.952789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:17:20.100289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:17:20.104576 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:17:20.262885 kubelet[2102]: E0715 23:17:20.262756 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:17:20.266078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:17:20.266233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:17:20.266533 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.5M memory peak. Jul 15 23:17:20.391140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873967534.mount: Deactivated successfully. 
Jul 15 23:17:20.395933 containerd[1530]: time="2025-07-15T23:17:20.395579743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:17:20.396400 containerd[1530]: time="2025-07-15T23:17:20.396369591Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 15 23:17:20.397364 containerd[1530]: time="2025-07-15T23:17:20.397333059Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:17:20.399112 containerd[1530]: time="2025-07-15T23:17:20.399061532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 23:17:20.399901 containerd[1530]: time="2025-07-15T23:17:20.399811415Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 691.512367ms" Jul 15 23:17:20.399901 containerd[1530]: time="2025-07-15T23:17:20.399848139Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 23:17:20.400422 containerd[1530]: time="2025-07-15T23:17:20.400374478Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 15 23:17:20.851775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673656035.mount: Deactivated 
successfully. Jul 15 23:17:22.244923 containerd[1530]: time="2025-07-15T23:17:22.244850940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:22.245464 containerd[1530]: time="2025-07-15T23:17:22.245433521Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 15 23:17:22.246298 containerd[1530]: time="2025-07-15T23:17:22.246273209Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:22.249372 containerd[1530]: time="2025-07-15T23:17:22.249297286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:22.250313 containerd[1530]: time="2025-07-15T23:17:22.250277629Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.849722691s" Jul 15 23:17:22.250352 containerd[1530]: time="2025-07-15T23:17:22.250320513Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 15 23:17:29.925110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:17:29.925256 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.5M memory peak. Jul 15 23:17:29.927129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 15 23:17:29.947117 systemd[1]: Reload requested from client PID 2199 ('systemctl') (unit session-7.scope)... Jul 15 23:17:29.947133 systemd[1]: Reloading... Jul 15 23:17:30.013199 zram_generator::config[2242]: No configuration found. Jul 15 23:17:30.123669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:17:30.219954 systemd[1]: Reloading finished in 272 ms. Jul 15 23:17:30.274638 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 23:17:30.274714 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 23:17:30.274968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:17:30.275014 systemd[1]: kubelet.service: Consumed 85ms CPU time, 94.9M memory peak. Jul 15 23:17:30.276456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:17:30.389590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:17:30.404557 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:17:30.443846 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:17:30.443846 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:17:30.443846 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:17:30.444191 kubelet[2288]: I0715 23:17:30.443889 2288 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:17:31.434894 kubelet[2288]: I0715 23:17:31.434843 2288 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:17:31.434894 kubelet[2288]: I0715 23:17:31.434879 2288 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:17:31.435125 kubelet[2288]: I0715 23:17:31.435092 2288 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:17:31.489740 kubelet[2288]: E0715 23:17:31.489523 2288 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 15 23:17:31.490526 kubelet[2288]: I0715 23:17:31.490504 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:17:31.500492 kubelet[2288]: I0715 23:17:31.500448 2288 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:17:31.503695 kubelet[2288]: I0715 23:17:31.503667 2288 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:17:31.504762 kubelet[2288]: I0715 23:17:31.504704 2288 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:17:31.504913 kubelet[2288]: I0715 23:17:31.504755 2288 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:17:31.505008 kubelet[2288]: I0715 23:17:31.504980 2288 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:17:31.505008 
kubelet[2288]: I0715 23:17:31.504990 2288 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:17:31.505761 kubelet[2288]: I0715 23:17:31.505731 2288 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:17:31.510029 kubelet[2288]: I0715 23:17:31.510000 2288 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:17:31.510029 kubelet[2288]: I0715 23:17:31.510030 2288 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:17:31.510101 kubelet[2288]: I0715 23:17:31.510057 2288 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:17:31.511195 kubelet[2288]: I0715 23:17:31.511031 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:17:31.512476 kubelet[2288]: I0715 23:17:31.512090 2288 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:17:31.512476 kubelet[2288]: E0715 23:17:31.512416 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:17:31.512695 kubelet[2288]: E0715 23:17:31.512669 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:17:31.512892 kubelet[2288]: I0715 23:17:31.512859 2288 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:17:31.513016 kubelet[2288]: W0715 
23:17:31.512995 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 23:17:31.515343 kubelet[2288]: I0715 23:17:31.515323 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:17:31.515412 kubelet[2288]: I0715 23:17:31.515369 2288 server.go:1289] "Started kubelet" Jul 15 23:17:31.518582 kubelet[2288]: I0715 23:17:31.517581 2288 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:17:31.519240 kubelet[2288]: I0715 23:17:31.519186 2288 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:17:31.521841 kubelet[2288]: I0715 23:17:31.521802 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:17:31.524094 kubelet[2288]: E0715 23:17:31.522938 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18528feca34d226a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:17:31.51533937 +0000 UTC m=+1.106812207,LastTimestamp:2025-07-15 23:17:31.51533937 +0000 UTC m=+1.106812207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 23:17:31.524461 kubelet[2288]: I0715 23:17:31.524441 2288 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:17:31.524643 kubelet[2288]: I0715 23:17:31.524619 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:17:31.524697 kubelet[2288]: 
E0715 23:17:31.524673 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:31.525493 kubelet[2288]: I0715 23:17:31.525472 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:17:31.525564 kubelet[2288]: I0715 23:17:31.525550 2288 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:17:31.526003 kubelet[2288]: I0715 23:17:31.524897 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:17:31.526201 kubelet[2288]: I0715 23:17:31.526176 2288 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:17:31.526369 kubelet[2288]: E0715 23:17:31.526332 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:17:31.526582 kubelet[2288]: E0715 23:17:31.526547 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="200ms" Jul 15 23:17:31.526821 kubelet[2288]: I0715 23:17:31.526798 2288 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:17:31.526928 kubelet[2288]: I0715 23:17:31.526906 2288 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:17:31.527055 kubelet[2288]: E0715 23:17:31.527034 2288 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:17:31.527897 kubelet[2288]: I0715 23:17:31.527877 2288 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:17:31.539837 kubelet[2288]: I0715 23:17:31.539784 2288 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:17:31.539837 kubelet[2288]: I0715 23:17:31.539801 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:17:31.539837 kubelet[2288]: I0715 23:17:31.539820 2288 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:17:31.540737 kubelet[2288]: I0715 23:17:31.540587 2288 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:17:31.541993 kubelet[2288]: I0715 23:17:31.541722 2288 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 15 23:17:31.541993 kubelet[2288]: I0715 23:17:31.541742 2288 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:17:31.541993 kubelet[2288]: I0715 23:17:31.541763 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 15 23:17:31.541993 kubelet[2288]: I0715 23:17:31.541769 2288 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:17:31.541993 kubelet[2288]: E0715 23:17:31.541808 2288 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:17:31.542738 kubelet[2288]: E0715 23:17:31.542706 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:17:31.625169 kubelet[2288]: E0715 23:17:31.625122 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:31.642446 kubelet[2288]: E0715 23:17:31.642401 2288 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 23:17:31.687797 kubelet[2288]: I0715 23:17:31.687707 2288 policy_none.go:49] "None policy: Start" Jul 15 23:17:31.687797 kubelet[2288]: I0715 23:17:31.687743 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:17:31.687797 kubelet[2288]: I0715 23:17:31.687757 2288 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:17:31.713274 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 23:17:31.724313 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jul 15 23:17:31.725257 kubelet[2288]: E0715 23:17:31.725219 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:31.727306 kubelet[2288]: E0715 23:17:31.727274 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="400ms" Jul 15 23:17:31.727898 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 15 23:17:31.744371 kubelet[2288]: E0715 23:17:31.744331 2288 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:17:31.744573 kubelet[2288]: I0715 23:17:31.744559 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:17:31.744632 kubelet[2288]: I0715 23:17:31.744573 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:17:31.745678 kubelet[2288]: I0715 23:17:31.745587 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:17:31.746135 kubelet[2288]: E0715 23:17:31.746064 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 15 23:17:31.746190 kubelet[2288]: E0715 23:17:31.746147 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 23:17:31.845976 kubelet[2288]: I0715 23:17:31.845939 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:17:31.846644 kubelet[2288]: E0715 23:17:31.846564 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jul 15 23:17:31.853787 systemd[1]: Created slice kubepods-burstable-pod80c179bb01e2edea41f80b3e3cca9143.slice - libcontainer container kubepods-burstable-pod80c179bb01e2edea41f80b3e3cca9143.slice. Jul 15 23:17:31.884737 kubelet[2288]: E0715 23:17:31.884621 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:31.888082 systemd[1]: Created slice kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice - libcontainer container kubepods-burstable-podee495458985854145bfdfbfdfe0cc6b2.slice. Jul 15 23:17:31.890059 kubelet[2288]: E0715 23:17:31.889964 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:31.891466 systemd[1]: Created slice kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice - libcontainer container kubepods-burstable-pod9f30683e4d57ebf2ca7dbf4704079d65.slice. 
Jul 15 23:17:31.895747 kubelet[2288]: E0715 23:17:31.895701 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:31.928740 kubelet[2288]: I0715 23:17:31.928638 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:31.928740 kubelet[2288]: I0715 23:17:31.928689 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:31.928740 kubelet[2288]: I0715 23:17:31.928711 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:31.928912 kubelet[2288]: I0715 23:17:31.928752 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:31.928912 kubelet[2288]: I0715 23:17:31.928806 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:31.928912 kubelet[2288]: I0715 23:17:31.928854 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:31.928912 kubelet[2288]: I0715 23:17:31.928893 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:31.928912 kubelet[2288]: I0715 23:17:31.928912 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:31.929031 kubelet[2288]: I0715 23:17:31.928944 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:32.050321 kubelet[2288]: I0715 23:17:32.050269 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:17:32.050650 kubelet[2288]: E0715 
23:17:32.050615 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jul 15 23:17:32.128228 kubelet[2288]: E0715 23:17:32.128155 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="800ms" Jul 15 23:17:32.185721 kubelet[2288]: E0715 23:17:32.185670 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.186467 containerd[1530]: time="2025-07-15T23:17:32.186419860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80c179bb01e2edea41f80b3e3cca9143,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:32.190606 kubelet[2288]: E0715 23:17:32.190579 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.191059 containerd[1530]: time="2025-07-15T23:17:32.190931284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:32.196233 kubelet[2288]: E0715 23:17:32.196197 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.196716 containerd[1530]: time="2025-07-15T23:17:32.196629999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:32.303863 containerd[1530]: 
time="2025-07-15T23:17:32.303716410Z" level=info msg="connecting to shim aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be" address="unix:///run/containerd/s/bc0c39ec2c8bd54b76876c00221135276f6368f2154571f03722b31dc9fa90b9" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:32.305299 containerd[1530]: time="2025-07-15T23:17:32.305189882Z" level=info msg="connecting to shim 6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312" address="unix:///run/containerd/s/ea2983cfef858c475c848f47cfd88967a86a3d407b4bff4b832873358f2d6fa0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:32.307861 containerd[1530]: time="2025-07-15T23:17:32.307824203Z" level=info msg="connecting to shim 1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6" address="unix:///run/containerd/s/7d4e2d18dc45590bddc132b496787b9e5e9fa81cc10405dde03fd2064ee83e0a" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:32.332408 systemd[1]: Started cri-containerd-6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312.scope - libcontainer container 6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312. Jul 15 23:17:32.344981 systemd[1]: Started cri-containerd-aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be.scope - libcontainer container aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be. Jul 15 23:17:32.348953 systemd[1]: Started cri-containerd-1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6.scope - libcontainer container 1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6. 
Jul 15 23:17:32.381905 containerd[1530]: time="2025-07-15T23:17:32.381800288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ee495458985854145bfdfbfdfe0cc6b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312\"" Jul 15 23:17:32.383188 kubelet[2288]: E0715 23:17:32.383046 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.388803 containerd[1530]: time="2025-07-15T23:17:32.388422833Z" level=info msg="CreateContainer within sandbox \"6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 23:17:32.388963 containerd[1530]: time="2025-07-15T23:17:32.388941073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:80c179bb01e2edea41f80b3e3cca9143,Namespace:kube-system,Attempt:0,} returns sandbox id \"aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be\"" Jul 15 23:17:32.396657 kubelet[2288]: E0715 23:17:32.396624 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.399067 containerd[1530]: time="2025-07-15T23:17:32.399016442Z" level=info msg="Container 9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:32.400158 containerd[1530]: time="2025-07-15T23:17:32.400109445Z" level=info msg="CreateContainer within sandbox \"aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 23:17:32.405247 containerd[1530]: time="2025-07-15T23:17:32.405198674Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9f30683e4d57ebf2ca7dbf4704079d65,Namespace:kube-system,Attempt:0,} returns sandbox id \"1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6\"" Jul 15 23:17:32.406245 kubelet[2288]: E0715 23:17:32.406195 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 15 23:17:32.406395 kubelet[2288]: E0715 23:17:32.406233 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.407710 containerd[1530]: time="2025-07-15T23:17:32.407513490Z" level=info msg="CreateContainer within sandbox \"6a07dab349957f8cae204122394255f3c2ddc28b9f047e2987e1b7e94c51d312\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e\"" Jul 15 23:17:32.408311 containerd[1530]: time="2025-07-15T23:17:32.408282509Z" level=info msg="StartContainer for \"9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e\"" Jul 15 23:17:32.409477 containerd[1530]: time="2025-07-15T23:17:32.409449118Z" level=info msg="connecting to shim 9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e" address="unix:///run/containerd/s/ea2983cfef858c475c848f47cfd88967a86a3d407b4bff4b832873358f2d6fa0" protocol=ttrpc version=3 Jul 15 23:17:32.410482 containerd[1530]: time="2025-07-15T23:17:32.410453915Z" level=info msg="CreateContainer within sandbox \"1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 23:17:32.411236 containerd[1530]: 
time="2025-07-15T23:17:32.411191731Z" level=info msg="Container b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:32.419354 containerd[1530]: time="2025-07-15T23:17:32.419292029Z" level=info msg="Container cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:32.420453 containerd[1530]: time="2025-07-15T23:17:32.420346189Z" level=info msg="CreateContainer within sandbox \"aacf1c5165edda5bcea117923ddd6a05b6b9002e5e9115d8da79d4de5874d3be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e\"" Jul 15 23:17:32.421005 containerd[1530]: time="2025-07-15T23:17:32.420986878Z" level=info msg="StartContainer for \"b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e\"" Jul 15 23:17:32.422568 containerd[1530]: time="2025-07-15T23:17:32.422454910Z" level=info msg="connecting to shim b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e" address="unix:///run/containerd/s/bc0c39ec2c8bd54b76876c00221135276f6368f2154571f03722b31dc9fa90b9" protocol=ttrpc version=3 Jul 15 23:17:32.428774 containerd[1530]: time="2025-07-15T23:17:32.428731349Z" level=info msg="CreateContainer within sandbox \"1e5c872b919e7cff838e28991b9ecf441dedc6d5550b6c4d884fc726deaeb0b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876\"" Jul 15 23:17:32.429119 containerd[1530]: time="2025-07-15T23:17:32.429083776Z" level=info msg="StartContainer for \"cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876\"" Jul 15 23:17:32.430687 containerd[1530]: time="2025-07-15T23:17:32.430658016Z" level=info msg="connecting to shim cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876" 
address="unix:///run/containerd/s/7d4e2d18dc45590bddc132b496787b9e5e9fa81cc10405dde03fd2064ee83e0a" protocol=ttrpc version=3 Jul 15 23:17:32.432432 systemd[1]: Started cri-containerd-9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e.scope - libcontainer container 9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e. Jul 15 23:17:32.444394 systemd[1]: Started cri-containerd-b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e.scope - libcontainer container b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e. Jul 15 23:17:32.447972 systemd[1]: Started cri-containerd-cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876.scope - libcontainer container cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876. Jul 15 23:17:32.453546 kubelet[2288]: I0715 23:17:32.453475 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:17:32.454573 kubelet[2288]: E0715 23:17:32.454542 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Jul 15 23:17:32.502069 containerd[1530]: time="2025-07-15T23:17:32.496697255Z" level=info msg="StartContainer for \"9afd2c75160f474b5b3546d11c25afa1be1f2ff353ab1eded2f0a96f70a8b47e\" returns successfully" Jul 15 23:17:32.506841 containerd[1530]: time="2025-07-15T23:17:32.503830640Z" level=info msg="StartContainer for \"b0cac18bb3b310fa896ef224f64f1b3ffa4074730cfd272cf31493f632e1ac4e\" returns successfully" Jul 15 23:17:32.506841 containerd[1530]: time="2025-07-15T23:17:32.504341679Z" level=info msg="StartContainer for \"cdff9174d4b4f69188f9454ee65d1bdaea87e94cb2db6644ab797df89cdad876\" returns successfully" Jul 15 23:17:32.550916 kubelet[2288]: E0715 23:17:32.548446 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Jul 15 23:17:32.550916 kubelet[2288]: E0715 23:17:32.548582 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.551779 kubelet[2288]: E0715 23:17:32.551523 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:32.551779 kubelet[2288]: E0715 23:17:32.551640 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.558081 kubelet[2288]: E0715 23:17:32.553885 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:32.558081 kubelet[2288]: E0715 23:17:32.554008 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:32.607690 kubelet[2288]: E0715 23:17:32.607638 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 15 23:17:32.616266 kubelet[2288]: E0715 23:17:32.616161 2288 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 15 23:17:32.650459 kubelet[2288]: E0715 23:17:32.650411 
2288 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 15 23:17:33.257016 kubelet[2288]: I0715 23:17:33.256978 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:17:33.557021 kubelet[2288]: E0715 23:17:33.556923 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:33.559200 kubelet[2288]: E0715 23:17:33.557201 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:33.559563 kubelet[2288]: E0715 23:17:33.559535 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:33.559661 kubelet[2288]: E0715 23:17:33.559647 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:34.022477 kubelet[2288]: E0715 23:17:34.022436 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 23:17:34.079418 kubelet[2288]: I0715 23:17:34.079219 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:17:34.079418 kubelet[2288]: E0715 23:17:34.079256 2288 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 23:17:34.088757 kubelet[2288]: E0715 23:17:34.088725 2288 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.190823 kubelet[2288]: E0715 23:17:34.190775 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.291468 kubelet[2288]: E0715 23:17:34.291362 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.392002 kubelet[2288]: E0715 23:17:34.391965 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.492694 kubelet[2288]: E0715 23:17:34.492648 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.558384 kubelet[2288]: E0715 23:17:34.558066 2288 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 15 23:17:34.558384 kubelet[2288]: E0715 23:17:34.558224 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:34.593342 kubelet[2288]: E0715 23:17:34.593086 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.693872 kubelet[2288]: E0715 23:17:34.693829 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.794518 kubelet[2288]: E0715 23:17:34.794482 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.895255 kubelet[2288]: E0715 23:17:34.895101 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:34.995511 kubelet[2288]: E0715 
23:17:34.995446 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.096403 kubelet[2288]: E0715 23:17:35.096347 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.197284 kubelet[2288]: E0715 23:17:35.197153 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.298154 kubelet[2288]: E0715 23:17:35.298097 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.398890 kubelet[2288]: E0715 23:17:35.398852 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.499943 kubelet[2288]: E0715 23:17:35.499825 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:17:35.622673 kubelet[2288]: I0715 23:17:35.622641 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:35.625590 kubelet[2288]: I0715 23:17:35.625518 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:35.634596 kubelet[2288]: I0715 23:17:35.634549 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:35.634983 kubelet[2288]: E0715 23:17:35.634958 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:35.639200 kubelet[2288]: E0715 23:17:35.639158 2288 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:35.639200 
kubelet[2288]: I0715 23:17:35.639188 2288 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:36.349349 systemd[1]: Reload requested from client PID 2571 ('systemctl') (unit session-7.scope)... Jul 15 23:17:36.349363 systemd[1]: Reloading... Jul 15 23:17:36.414272 zram_generator::config[2617]: No configuration found. Jul 15 23:17:36.480425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:17:36.512279 kubelet[2288]: I0715 23:17:36.512236 2288 apiserver.go:52] "Watching apiserver" Jul 15 23:17:36.515145 kubelet[2288]: E0715 23:17:36.514779 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:36.515770 kubelet[2288]: E0715 23:17:36.515717 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:36.525828 kubelet[2288]: I0715 23:17:36.525805 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:17:36.560200 kubelet[2288]: E0715 23:17:36.560160 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:36.579886 systemd[1]: Reloading finished in 230 ms. Jul 15 23:17:36.610455 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:17:36.624749 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:17:36.625000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 15 23:17:36.625061 systemd[1]: kubelet.service: Consumed 1.526s CPU time, 129.8M memory peak. Jul 15 23:17:36.626856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:17:36.777247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:17:36.782200 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:17:36.824801 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:17:36.824801 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:17:36.824801 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:17:36.825502 kubelet[2656]: I0715 23:17:36.824832 2656 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:17:36.833348 kubelet[2656]: I0715 23:17:36.833310 2656 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 15 23:17:36.833514 kubelet[2656]: I0715 23:17:36.833503 2656 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:17:36.833898 kubelet[2656]: I0715 23:17:36.833882 2656 server.go:956] "Client rotation is on, will bootstrap in background" Jul 15 23:17:36.835811 kubelet[2656]: I0715 23:17:36.835749 2656 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 15 23:17:36.838553 kubelet[2656]: I0715 23:17:36.838435 2656 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:17:36.842779 kubelet[2656]: I0715 23:17:36.842749 2656 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:17:36.845862 kubelet[2656]: I0715 23:17:36.845808 2656 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:17:36.846098 kubelet[2656]: I0715 23:17:36.846066 2656 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:17:36.846296 kubelet[2656]: I0715 23:17:36.846094 2656 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:17:36.846390 kubelet[2656]: I0715 23:17:36.846303 2656 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 23:17:36.846390 
kubelet[2656]: I0715 23:17:36.846313 2656 container_manager_linux.go:303] "Creating device plugin manager" Jul 15 23:17:36.846390 kubelet[2656]: I0715 23:17:36.846368 2656 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:17:36.846546 kubelet[2656]: I0715 23:17:36.846530 2656 kubelet.go:480] "Attempting to sync node with API server" Jul 15 23:17:36.846585 kubelet[2656]: I0715 23:17:36.846549 2656 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:17:36.847099 kubelet[2656]: I0715 23:17:36.847082 2656 kubelet.go:386] "Adding apiserver pod source" Jul 15 23:17:36.847250 kubelet[2656]: I0715 23:17:36.847108 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:17:36.848677 kubelet[2656]: I0715 23:17:36.848654 2656 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:17:36.849218 kubelet[2656]: I0715 23:17:36.849191 2656 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 15 23:17:36.852088 kubelet[2656]: I0715 23:17:36.852047 2656 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:17:36.852166 kubelet[2656]: I0715 23:17:36.852094 2656 server.go:1289] "Started kubelet" Jul 15 23:17:36.854790 kubelet[2656]: I0715 23:17:36.853438 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:17:36.857829 kubelet[2656]: I0715 23:17:36.857790 2656 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:17:36.858860 kubelet[2656]: I0715 23:17:36.858834 2656 server.go:317] "Adding debug handlers to kubelet server" Jul 15 23:17:36.861901 kubelet[2656]: I0715 23:17:36.861769 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:17:36.862317 kubelet[2656]: I0715 23:17:36.862293 2656 server.go:255] "Starting 
to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:17:36.862766 kubelet[2656]: I0715 23:17:36.862674 2656 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:17:36.874240 kubelet[2656]: I0715 23:17:36.872270 2656 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:17:36.875236 kubelet[2656]: E0715 23:17:36.874522 2656 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:17:36.875236 kubelet[2656]: I0715 23:17:36.874639 2656 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:17:36.875380 kubelet[2656]: I0715 23:17:36.874097 2656 factory.go:223] Registration of the systemd container factory successfully Jul 15 23:17:36.877287 kubelet[2656]: I0715 23:17:36.875543 2656 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:17:36.877896 kubelet[2656]: I0715 23:17:36.877874 2656 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:17:36.882235 kubelet[2656]: I0715 23:17:36.881931 2656 factory.go:223] Registration of the containerd container factory successfully Jul 15 23:17:36.900722 kubelet[2656]: I0715 23:17:36.900686 2656 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 15 23:17:36.902661 kubelet[2656]: I0715 23:17:36.902634 2656 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:17:36.902661 kubelet[2656]: I0715 23:17:36.902657 2656 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 15 23:17:36.902850 kubelet[2656]: I0715 23:17:36.902689 2656 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:17:36.902850 kubelet[2656]: I0715 23:17:36.902695 2656 kubelet.go:2436] "Starting kubelet main sync loop" Jul 15 23:17:36.902850 kubelet[2656]: E0715 23:17:36.902772 2656 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:17:36.928421 kubelet[2656]: I0715 23:17:36.928396 2656 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:17:36.928641 kubelet[2656]: I0715 23:17:36.928624 2656 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:17:36.928708 kubelet[2656]: I0715 23:17:36.928699 2656 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:17:36.928891 kubelet[2656]: I0715 23:17:36.928867 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:17:36.928974 kubelet[2656]: I0715 23:17:36.928950 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:17:36.929027 kubelet[2656]: I0715 23:17:36.929018 2656 policy_none.go:49] "None policy: Start" Jul 15 23:17:36.929079 kubelet[2656]: I0715 23:17:36.929070 2656 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:17:36.929133 kubelet[2656]: I0715 23:17:36.929124 2656 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:17:36.929314 kubelet[2656]: I0715 23:17:36.929298 2656 state_mem.go:75] "Updated machine memory state" Jul 15 23:17:36.933201 kubelet[2656]: E0715 23:17:36.933170 2656 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 15 23:17:36.933557 kubelet[2656]: I0715 
23:17:36.933361 2656 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:17:36.933557 kubelet[2656]: I0715 23:17:36.933380 2656 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:17:36.933557 kubelet[2656]: I0715 23:17:36.933555 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:17:36.935575 kubelet[2656]: E0715 23:17:36.935549 2656 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:17:37.003563 kubelet[2656]: I0715 23:17:37.003530 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.003706 kubelet[2656]: I0715 23:17:37.003677 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:37.003759 kubelet[2656]: I0715 23:17:37.003530 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:37.009132 kubelet[2656]: E0715 23:17:37.009071 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.015936 kubelet[2656]: E0715 23:17:37.015899 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:37.016057 kubelet[2656]: E0715 23:17:37.015899 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:37.040195 kubelet[2656]: I0715 23:17:37.040172 2656 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:17:37.046477 kubelet[2656]: I0715 23:17:37.046445 2656 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Jul 15 23:17:37.046579 kubelet[2656]: I0715 23:17:37.046532 2656 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:17:37.079436 kubelet[2656]: I0715 23:17:37.079387 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.079574 kubelet[2656]: I0715 23:17:37.079442 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.079574 kubelet[2656]: I0715 23:17:37.079467 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.079574 kubelet[2656]: I0715 23:17:37.079485 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.079574 kubelet[2656]: I0715 23:17:37.079513 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/9f30683e4d57ebf2ca7dbf4704079d65-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9f30683e4d57ebf2ca7dbf4704079d65\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:37.079574 kubelet[2656]: I0715 23:17:37.079529 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee495458985854145bfdfbfdfe0cc6b2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ee495458985854145bfdfbfdfe0cc6b2\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.079678 kubelet[2656]: I0715 23:17:37.079546 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:37.079678 kubelet[2656]: I0715 23:17:37.079562 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:37.079678 kubelet[2656]: I0715 23:17:37.079596 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80c179bb01e2edea41f80b3e3cca9143-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"80c179bb01e2edea41f80b3e3cca9143\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:17:37.309924 kubelet[2656]: E0715 23:17:37.309883 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.316441 kubelet[2656]: E0715 23:17:37.316400 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.316441 kubelet[2656]: E0715 23:17:37.316429 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.355757 sudo[2696]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:17:37.356008 sudo[2696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:17:37.810684 sudo[2696]: pam_unix(sudo:session): session closed for user root Jul 15 23:17:37.848027 kubelet[2656]: I0715 23:17:37.847981 2656 apiserver.go:52] "Watching apiserver" Jul 15 23:17:37.874745 kubelet[2656]: I0715 23:17:37.874710 2656 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:17:37.917642 kubelet[2656]: I0715 23:17:37.917616 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.917821 kubelet[2656]: I0715 23:17:37.917697 2656 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:37.921008 kubelet[2656]: E0715 23:17:37.920977 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.924326 kubelet[2656]: E0715 23:17:37.924298 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:17:37.926864 kubelet[2656]: E0715 23:17:37.926823 2656 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.927456 kubelet[2656]: E0715 23:17:37.927432 2656 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:17:37.928263 kubelet[2656]: E0715 23:17:37.928246 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:37.954046 kubelet[2656]: I0715 23:17:37.953968 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.953951495 podStartE2EDuration="2.953951495s" podCreationTimestamp="2025-07-15 23:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:17:37.94604006 +0000 UTC m=+1.159297772" watchObservedRunningTime="2025-07-15 23:17:37.953951495 +0000 UTC m=+1.167209207" Jul 15 23:17:37.955638 kubelet[2656]: I0715 23:17:37.955585 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.95557036 podStartE2EDuration="2.95557036s" podCreationTimestamp="2025-07-15 23:17:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:17:37.954602577 +0000 UTC m=+1.167860289" watchObservedRunningTime="2025-07-15 23:17:37.95557036 +0000 UTC m=+1.168828112" Jul 15 23:17:37.972031 kubelet[2656]: I0715 23:17:37.971786 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.971768295 podStartE2EDuration="2.971768295s" podCreationTimestamp="2025-07-15 23:17:35 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:17:37.963621204 +0000 UTC m=+1.176878876" watchObservedRunningTime="2025-07-15 23:17:37.971768295 +0000 UTC m=+1.185026047" Jul 15 23:17:38.919614 kubelet[2656]: E0715 23:17:38.919449 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:38.919614 kubelet[2656]: E0715 23:17:38.919543 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:38.920125 kubelet[2656]: E0715 23:17:38.920103 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:39.505099 sudo[1736]: pam_unix(sudo:session): session closed for user root Jul 15 23:17:39.506851 sshd[1735]: Connection closed by 10.0.0.1 port 55556 Jul 15 23:17:39.507321 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jul 15 23:17:39.510940 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:55556.service: Deactivated successfully. Jul 15 23:17:39.512793 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:17:39.512963 systemd[1]: session-7.scope: Consumed 10.072s CPU time, 260M memory peak. Jul 15 23:17:39.513899 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:17:39.515003 systemd-logind[1510]: Removed session 7. 
Jul 15 23:17:39.920878 kubelet[2656]: E0715 23:17:39.920746 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:40.922504 kubelet[2656]: E0715 23:17:40.922460 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:42.297173 kubelet[2656]: I0715 23:17:42.297141 2656 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:17:42.297863 kubelet[2656]: I0715 23:17:42.297748 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:17:42.297893 containerd[1530]: time="2025-07-15T23:17:42.297487171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:17:43.455645 systemd[1]: Created slice kubepods-burstable-pod620f76f2_06e8_402a_821a_100b435ef955.slice - libcontainer container kubepods-burstable-pod620f76f2_06e8_402a_821a_100b435ef955.slice. Jul 15 23:17:43.464569 systemd[1]: Created slice kubepods-besteffort-pod57a87e42_b994_49b0_a179_3afc85268b2f.slice - libcontainer container kubepods-besteffort-pod57a87e42_b994_49b0_a179_3afc85268b2f.slice. Jul 15 23:17:43.485305 systemd[1]: Created slice kubepods-besteffort-pod58a9a293_68c2_4917_a27a_6efcaa873138.slice - libcontainer container kubepods-besteffort-pod58a9a293_68c2_4917_a27a_6efcaa873138.slice. 
Jul 15 23:17:43.513695 kubelet[2656]: I0715 23:17:43.513638 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-xtables-lock\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.513695 kubelet[2656]: I0715 23:17:43.513682 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-run\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.513695 kubelet[2656]: I0715 23:17:43.513705 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-bpf-maps\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513729 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-hostproc\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513746 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-lib-modules\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513760 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-kernel\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513778 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsrq\" (UniqueName: \"kubernetes.io/projected/57a87e42-b994-49b0-a179-3afc85268b2f-kube-api-access-vxsrq\") pod \"kube-proxy-gh9zd\" (UID: \"57a87e42-b994-49b0-a179-3afc85268b2f\") " pod="kube-system/kube-proxy-gh9zd" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513793 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/620f76f2-06e8-402a-821a-100b435ef955-clustermesh-secrets\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514594 kubelet[2656]: I0715 23:17:43.513810 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-hubble-tls\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514766 kubelet[2656]: I0715 23:17:43.513829 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-cgroup\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514766 kubelet[2656]: I0715 23:17:43.513843 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cni-path\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514766 kubelet[2656]: I0715 23:17:43.513901 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-etc-cni-netd\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514766 kubelet[2656]: I0715 23:17:43.513928 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbvhg\" (UniqueName: \"kubernetes.io/projected/58a9a293-68c2-4917-a27a-6efcaa873138-kube-api-access-tbvhg\") pod \"cilium-operator-6c4d7847fc-62lfb\" (UID: \"58a9a293-68c2-4917-a27a-6efcaa873138\") " pod="kube-system/cilium-operator-6c4d7847fc-62lfb" Jul 15 23:17:43.514766 kubelet[2656]: I0715 23:17:43.513982 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/620f76f2-06e8-402a-821a-100b435ef955-cilium-config-path\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514862 kubelet[2656]: I0715 23:17:43.514092 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57a87e42-b994-49b0-a179-3afc85268b2f-kube-proxy\") pod \"kube-proxy-gh9zd\" (UID: \"57a87e42-b994-49b0-a179-3afc85268b2f\") " pod="kube-system/kube-proxy-gh9zd" Jul 15 23:17:43.514862 kubelet[2656]: I0715 23:17:43.514157 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-net\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514862 kubelet[2656]: I0715 23:17:43.514177 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a9a293-68c2-4917-a27a-6efcaa873138-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-62lfb\" (UID: \"58a9a293-68c2-4917-a27a-6efcaa873138\") " pod="kube-system/cilium-operator-6c4d7847fc-62lfb" Jul 15 23:17:43.514862 kubelet[2656]: I0715 23:17:43.514196 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57a87e42-b994-49b0-a179-3afc85268b2f-lib-modules\") pod \"kube-proxy-gh9zd\" (UID: \"57a87e42-b994-49b0-a179-3afc85268b2f\") " pod="kube-system/kube-proxy-gh9zd" Jul 15 23:17:43.514862 kubelet[2656]: I0715 23:17:43.514229 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbfx\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-kube-api-access-nsbfx\") pod \"cilium-4qt75\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " pod="kube-system/cilium-4qt75" Jul 15 23:17:43.514962 kubelet[2656]: I0715 23:17:43.514246 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57a87e42-b994-49b0-a179-3afc85268b2f-xtables-lock\") pod \"kube-proxy-gh9zd\" (UID: \"57a87e42-b994-49b0-a179-3afc85268b2f\") " pod="kube-system/kube-proxy-gh9zd" Jul 15 23:17:43.762937 kubelet[2656]: E0715 23:17:43.762044 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 
23:17:43.778044 containerd[1530]: time="2025-07-15T23:17:43.777880178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qt75,Uid:620f76f2-06e8-402a-821a-100b435ef955,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:43.782673 kubelet[2656]: E0715 23:17:43.782575 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:43.783162 containerd[1530]: time="2025-07-15T23:17:43.783132741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gh9zd,Uid:57a87e42-b994-49b0-a179-3afc85268b2f,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:43.790692 kubelet[2656]: E0715 23:17:43.790664 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:43.791149 containerd[1530]: time="2025-07-15T23:17:43.791118331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-62lfb,Uid:58a9a293-68c2-4917-a27a-6efcaa873138,Namespace:kube-system,Attempt:0,}" Jul 15 23:17:43.833419 containerd[1530]: time="2025-07-15T23:17:43.833359164Z" level=info msg="connecting to shim d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c" address="unix:///run/containerd/s/d3200901f7cb34868a0851919c6bb9b3d78d62aafd45fb3256740214fb1ff40b" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:43.837935 containerd[1530]: time="2025-07-15T23:17:43.837889087Z" level=info msg="connecting to shim 2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:43.879449 containerd[1530]: time="2025-07-15T23:17:43.879383240Z" level=info msg="connecting to shim 15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3" 
address="unix:///run/containerd/s/2c493e26a41142555a32ad6dea0c13702cd9d9e3e594f69d40714a5d33a20c23" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:17:43.904500 systemd[1]: Started cri-containerd-2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8.scope - libcontainer container 2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8. Jul 15 23:17:43.908424 systemd[1]: Started cri-containerd-15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3.scope - libcontainer container 15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3. Jul 15 23:17:43.945402 systemd[1]: Started cri-containerd-d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c.scope - libcontainer container d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c. Jul 15 23:17:43.949120 containerd[1530]: time="2025-07-15T23:17:43.948958744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4qt75,Uid:620f76f2-06e8-402a-821a-100b435ef955,Namespace:kube-system,Attempt:0,} returns sandbox id \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\"" Jul 15 23:17:43.951880 kubelet[2656]: E0715 23:17:43.951721 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:43.955048 containerd[1530]: time="2025-07-15T23:17:43.954658971Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:17:43.965638 containerd[1530]: time="2025-07-15T23:17:43.965598600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-62lfb,Uid:58a9a293-68c2-4917-a27a-6efcaa873138,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\"" Jul 15 23:17:43.966448 kubelet[2656]: E0715 23:17:43.966266 2656 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:43.983950 containerd[1530]: time="2025-07-15T23:17:43.983909625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gh9zd,Uid:57a87e42-b994-49b0-a179-3afc85268b2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c\"" Jul 15 23:17:43.984653 kubelet[2656]: E0715 23:17:43.984626 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:43.989306 containerd[1530]: time="2025-07-15T23:17:43.989234192Z" level=info msg="CreateContainer within sandbox \"d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:17:43.996634 containerd[1530]: time="2025-07-15T23:17:43.996594788Z" level=info msg="Container 6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:44.015008 containerd[1530]: time="2025-07-15T23:17:44.014888149Z" level=info msg="CreateContainer within sandbox \"d3d82bc7df53178cf66b8a4fc1df0a4e6369ae942432854085fe6ac1d21e7b5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929\"" Jul 15 23:17:44.015828 containerd[1530]: time="2025-07-15T23:17:44.015787516Z" level=info msg="StartContainer for \"6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929\"" Jul 15 23:17:44.017374 containerd[1530]: time="2025-07-15T23:17:44.017343317Z" level=info msg="connecting to shim 6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929" address="unix:///run/containerd/s/d3200901f7cb34868a0851919c6bb9b3d78d62aafd45fb3256740214fb1ff40b" protocol=ttrpc version=3 Jul 
15 23:17:44.038415 systemd[1]: Started cri-containerd-6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929.scope - libcontainer container 6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929. Jul 15 23:17:44.085184 containerd[1530]: time="2025-07-15T23:17:44.085135972Z" level=info msg="StartContainer for \"6bf0d9391cecddd2f88620f581f09afc6492e29d1771cb747137278d6afba929\" returns successfully" Jul 15 23:17:44.296342 kubelet[2656]: E0715 23:17:44.295987 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:44.416403 kubelet[2656]: E0715 23:17:44.414004 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:44.933188 kubelet[2656]: E0715 23:17:44.932652 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:44.933188 kubelet[2656]: E0715 23:17:44.932764 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:44.933188 kubelet[2656]: E0715 23:17:44.933004 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:44.946221 kubelet[2656]: I0715 23:17:44.945943 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gh9zd" podStartSLOduration=1.945926166 podStartE2EDuration="1.945926166s" podCreationTimestamp="2025-07-15 23:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-15 23:17:44.945819921 +0000 UTC m=+8.159077633" watchObservedRunningTime="2025-07-15 23:17:44.945926166 +0000 UTC m=+8.159183878" Jul 15 23:17:45.934680 kubelet[2656]: E0715 23:17:45.934638 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:49.266916 kubelet[2656]: E0715 23:17:49.266853 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:49.949725 kubelet[2656]: E0715 23:17:49.949686 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:52.457708 update_engine[1521]: I20250715 23:17:52.457610 1521 update_attempter.cc:509] Updating boot flags... Jul 15 23:17:53.254410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4136136212.mount: Deactivated successfully. 
Jul 15 23:17:57.365229 containerd[1530]: time="2025-07-15T23:17:57.365012399Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:57.372198 containerd[1530]: time="2025-07-15T23:17:57.372158566Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 15 23:17:57.373613 containerd[1530]: time="2025-07-15T23:17:57.373542774Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:57.375288 containerd[1530]: time="2025-07-15T23:17:57.375234072Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.42018284s" Jul 15 23:17:57.375288 containerd[1530]: time="2025-07-15T23:17:57.375268513Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 15 23:17:57.376257 containerd[1530]: time="2025-07-15T23:17:57.376233706Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:17:57.389654 containerd[1530]: time="2025-07-15T23:17:57.389574047Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:17:57.399308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount277119562.mount: Deactivated successfully. Jul 15 23:17:57.432251 containerd[1530]: time="2025-07-15T23:17:57.432116475Z" level=info msg="Container 879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:57.433794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2077268870.mount: Deactivated successfully. Jul 15 23:17:57.445676 containerd[1530]: time="2025-07-15T23:17:57.445532097Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\"" Jul 15 23:17:57.446130 containerd[1530]: time="2025-07-15T23:17:57.445996473Z" level=info msg="StartContainer for \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\"" Jul 15 23:17:57.448968 containerd[1530]: time="2025-07-15T23:17:57.448916094Z" level=info msg="connecting to shim 879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" protocol=ttrpc version=3 Jul 15 23:17:57.480374 systemd[1]: Started cri-containerd-879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16.scope - libcontainer container 879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16. Jul 15 23:17:57.512990 containerd[1530]: time="2025-07-15T23:17:57.512948904Z" level=info msg="StartContainer for \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" returns successfully" Jul 15 23:17:57.602718 systemd[1]: cri-containerd-879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16.scope: Deactivated successfully. 
Jul 15 23:17:57.612598 containerd[1530]: time="2025-07-15T23:17:57.612447497Z" level=info msg="TaskExit event in podsandbox handler container_id:\"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" id:\"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" pid:3105 exited_at:{seconds:1752621477 nanos:604529983}" Jul 15 23:17:57.612598 containerd[1530]: time="2025-07-15T23:17:57.612458337Z" level=info msg="received exit event container_id:\"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" id:\"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" pid:3105 exited_at:{seconds:1752621477 nanos:604529983}" Jul 15 23:17:57.968923 kubelet[2656]: E0715 23:17:57.968887 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:57.984222 containerd[1530]: time="2025-07-15T23:17:57.984170042Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:17:57.997242 containerd[1530]: time="2025-07-15T23:17:57.996729836Z" level=info msg="Container 5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:58.001468 containerd[1530]: time="2025-07-15T23:17:58.001425596Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\"" Jul 15 23:17:58.003296 containerd[1530]: time="2025-07-15T23:17:58.002146181Z" level=info msg="StartContainer for \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\"" Jul 15 23:17:58.003296 containerd[1530]: 
time="2025-07-15T23:17:58.002944807Z" level=info msg="connecting to shim 5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" protocol=ttrpc version=3 Jul 15 23:17:58.030377 systemd[1]: Started cri-containerd-5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a.scope - libcontainer container 5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a. Jul 15 23:17:58.063083 containerd[1530]: time="2025-07-15T23:17:58.062961493Z" level=info msg="StartContainer for \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" returns successfully" Jul 15 23:17:58.072501 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:17:58.072736 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:17:58.073036 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:17:58.074340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:17:58.076569 systemd[1]: cri-containerd-5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a.scope: Deactivated successfully. Jul 15 23:17:58.077560 containerd[1530]: time="2025-07-15T23:17:58.077368255Z" level=info msg="received exit event container_id:\"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" id:\"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" pid:3150 exited_at:{seconds:1752621478 nanos:77082805}" Jul 15 23:17:58.078133 containerd[1530]: time="2025-07-15T23:17:58.078057158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" id:\"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" pid:3150 exited_at:{seconds:1752621478 nanos:77082805}" Jul 15 23:17:58.105598 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 23:17:58.396995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16-rootfs.mount: Deactivated successfully. Jul 15 23:17:58.587505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3814245276.mount: Deactivated successfully. Jul 15 23:17:58.914793 containerd[1530]: time="2025-07-15T23:17:58.914745445Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:58.915200 containerd[1530]: time="2025-07-15T23:17:58.915117537Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 15 23:17:58.915962 containerd[1530]: time="2025-07-15T23:17:58.915937124Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:17:58.917110 containerd[1530]: time="2025-07-15T23:17:58.917074842Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.54066765s" Jul 15 23:17:58.917152 containerd[1530]: time="2025-07-15T23:17:58.917106404Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 23:17:58.921511 containerd[1530]: 
time="2025-07-15T23:17:58.921486870Z" level=info msg="CreateContainer within sandbox \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 23:17:58.928265 containerd[1530]: time="2025-07-15T23:17:58.928230695Z" level=info msg="Container 938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:58.935134 containerd[1530]: time="2025-07-15T23:17:58.935082164Z" level=info msg="CreateContainer within sandbox \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\"" Jul 15 23:17:58.935682 containerd[1530]: time="2025-07-15T23:17:58.935660904Z" level=info msg="StartContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\"" Jul 15 23:17:58.936763 containerd[1530]: time="2025-07-15T23:17:58.936727139Z" level=info msg="connecting to shim 938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104" address="unix:///run/containerd/s/2c493e26a41142555a32ad6dea0c13702cd9d9e3e594f69d40714a5d33a20c23" protocol=ttrpc version=3 Jul 15 23:17:58.959481 systemd[1]: Started cri-containerd-938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104.scope - libcontainer container 938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104. 
Jul 15 23:17:58.974489 kubelet[2656]: E0715 23:17:58.973530 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:58.979620 containerd[1530]: time="2025-07-15T23:17:58.979583572Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:17:58.992198 containerd[1530]: time="2025-07-15T23:17:58.992096230Z" level=info msg="Container 33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:17:58.998636 containerd[1530]: time="2025-07-15T23:17:58.998594887Z" level=info msg="StartContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" returns successfully" Jul 15 23:17:59.014481 containerd[1530]: time="2025-07-15T23:17:59.014250276Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\"" Jul 15 23:17:59.014836 containerd[1530]: time="2025-07-15T23:17:59.014808414Z" level=info msg="StartContainer for \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\"" Jul 15 23:17:59.016362 containerd[1530]: time="2025-07-15T23:17:59.016319383Z" level=info msg="connecting to shim 33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" protocol=ttrpc version=3 Jul 15 23:17:59.043379 systemd[1]: Started cri-containerd-33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb.scope - libcontainer container 33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb. 
Jul 15 23:17:59.113531 containerd[1530]: time="2025-07-15T23:17:59.113489249Z" level=info msg="StartContainer for \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" returns successfully" Jul 15 23:17:59.118701 systemd[1]: cri-containerd-33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb.scope: Deactivated successfully. Jul 15 23:17:59.120633 containerd[1530]: time="2025-07-15T23:17:59.120552358Z" level=info msg="received exit event container_id:\"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" id:\"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" pid:3245 exited_at:{seconds:1752621479 nanos:120371792}" Jul 15 23:17:59.120937 containerd[1530]: time="2025-07-15T23:17:59.120767005Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" id:\"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" pid:3245 exited_at:{seconds:1752621479 nanos:120371792}" Jul 15 23:17:59.988278 kubelet[2656]: E0715 23:17:59.988246 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:59.991449 kubelet[2656]: E0715 23:17:59.991410 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:17:59.998640 containerd[1530]: time="2025-07-15T23:17:59.998436905Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:18:00.001440 kubelet[2656]: I0715 23:18:00.001321 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-62lfb" podStartSLOduration=2.050189673 
podStartE2EDuration="17.001299477s" podCreationTimestamp="2025-07-15 23:17:43 +0000 UTC" firstStartedPulling="2025-07-15 23:17:43.966877349 +0000 UTC m=+7.180135061" lastFinishedPulling="2025-07-15 23:17:58.917987153 +0000 UTC m=+22.131244865" observedRunningTime="2025-07-15 23:18:00.000375648 +0000 UTC m=+23.213633360" watchObservedRunningTime="2025-07-15 23:18:00.001299477 +0000 UTC m=+23.214557189" Jul 15 23:18:00.014245 containerd[1530]: time="2025-07-15T23:18:00.013116008Z" level=info msg="Container dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:00.023756 containerd[1530]: time="2025-07-15T23:18:00.023603577Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\"" Jul 15 23:18:00.024936 containerd[1530]: time="2025-07-15T23:18:00.024632489Z" level=info msg="StartContainer for \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\"" Jul 15 23:18:00.025961 containerd[1530]: time="2025-07-15T23:18:00.025915970Z" level=info msg="connecting to shim dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" protocol=ttrpc version=3 Jul 15 23:18:00.052433 systemd[1]: Started cri-containerd-dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317.scope - libcontainer container dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317. Jul 15 23:18:00.102820 systemd[1]: cri-containerd-dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317.scope: Deactivated successfully. 
Jul 15 23:18:00.104833 containerd[1530]: time="2025-07-15T23:18:00.104475594Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" id:\"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" pid:3286 exited_at:{seconds:1752621480 nanos:104037300}" Jul 15 23:18:00.110450 containerd[1530]: time="2025-07-15T23:18:00.110411940Z" level=info msg="received exit event container_id:\"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" id:\"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" pid:3286 exited_at:{seconds:1752621480 nanos:104037300}" Jul 15 23:18:00.112244 containerd[1530]: time="2025-07-15T23:18:00.112196276Z" level=info msg="StartContainer for \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" returns successfully" Jul 15 23:18:00.396900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317-rootfs.mount: Deactivated successfully. 
Jul 15 23:18:00.995094 kubelet[2656]: E0715 23:18:00.994906 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:00.995094 kubelet[2656]: E0715 23:18:00.995012 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:01.000430 containerd[1530]: time="2025-07-15T23:18:01.000373817Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:18:01.019748 containerd[1530]: time="2025-07-15T23:18:01.019558241Z" level=info msg="Container 13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:01.025298 containerd[1530]: time="2025-07-15T23:18:01.025255534Z" level=info msg="CreateContainer within sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\"" Jul 15 23:18:01.026219 containerd[1530]: time="2025-07-15T23:18:01.025976196Z" level=info msg="StartContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\"" Jul 15 23:18:01.027136 containerd[1530]: time="2025-07-15T23:18:01.027089230Z" level=info msg="connecting to shim 13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648" address="unix:///run/containerd/s/9f46372c21aed9f72756121861ba5721416d890bc65ab6fc1b56c89676dff882" protocol=ttrpc version=3 Jul 15 23:18:01.048389 systemd[1]: Started cri-containerd-13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648.scope - libcontainer container 13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648. 
Jul 15 23:18:01.076912 containerd[1530]: time="2025-07-15T23:18:01.076867343Z" level=info msg="StartContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" returns successfully" Jul 15 23:18:01.170240 containerd[1530]: time="2025-07-15T23:18:01.169842568Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" id:\"1a8c14010265178ed4363813962e74d396fa895a7e5108481571b18112f1f064\" pid:3353 exited_at:{seconds:1752621481 nanos:169367394}" Jul 15 23:18:01.195527 kubelet[2656]: I0715 23:18:01.195494 2656 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 15 23:18:01.274406 systemd[1]: Created slice kubepods-burstable-pod35d36c09_4202_4786_a21b_62ae9e1ddcf8.slice - libcontainer container kubepods-burstable-pod35d36c09_4202_4786_a21b_62ae9e1ddcf8.slice. Jul 15 23:18:01.282125 systemd[1]: Created slice kubepods-burstable-pod770d6fbd_fcf1_45b8_9ad5_830fbd1ce442.slice - libcontainer container kubepods-burstable-pod770d6fbd_fcf1_45b8_9ad5_830fbd1ce442.slice. 
Jul 15 23:18:01.344259 kubelet[2656]: I0715 23:18:01.340795 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/770d6fbd-fcf1-45b8-9ad5-830fbd1ce442-config-volume\") pod \"coredns-674b8bbfcf-xdg96\" (UID: \"770d6fbd-fcf1-45b8-9ad5-830fbd1ce442\") " pod="kube-system/coredns-674b8bbfcf-xdg96"
Jul 15 23:18:01.344259 kubelet[2656]: I0715 23:18:01.340841 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35d36c09-4202-4786-a21b-62ae9e1ddcf8-config-volume\") pod \"coredns-674b8bbfcf-82jgp\" (UID: \"35d36c09-4202-4786-a21b-62ae9e1ddcf8\") " pod="kube-system/coredns-674b8bbfcf-82jgp"
Jul 15 23:18:01.344259 kubelet[2656]: I0715 23:18:01.340865 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kv7j\" (UniqueName: \"kubernetes.io/projected/35d36c09-4202-4786-a21b-62ae9e1ddcf8-kube-api-access-8kv7j\") pod \"coredns-674b8bbfcf-82jgp\" (UID: \"35d36c09-4202-4786-a21b-62ae9e1ddcf8\") " pod="kube-system/coredns-674b8bbfcf-82jgp"
Jul 15 23:18:01.344259 kubelet[2656]: I0715 23:18:01.340888 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r4g\" (UniqueName: \"kubernetes.io/projected/770d6fbd-fcf1-45b8-9ad5-830fbd1ce442-kube-api-access-q9r4g\") pod \"coredns-674b8bbfcf-xdg96\" (UID: \"770d6fbd-fcf1-45b8-9ad5-830fbd1ce442\") " pod="kube-system/coredns-674b8bbfcf-xdg96"
Jul 15 23:18:01.579635 kubelet[2656]: E0715 23:18:01.579383 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:01.580428 containerd[1530]: time="2025-07-15T23:18:01.580367484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-82jgp,Uid:35d36c09-4202-4786-a21b-62ae9e1ddcf8,Namespace:kube-system,Attempt:0,}"
Jul 15 23:18:01.585255 kubelet[2656]: E0715 23:18:01.585203 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:01.586583 containerd[1530]: time="2025-07-15T23:18:01.585680685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xdg96,Uid:770d6fbd-fcf1-45b8-9ad5-830fbd1ce442,Namespace:kube-system,Attempt:0,}"
Jul 15 23:18:02.001387 kubelet[2656]: E0715 23:18:02.001347 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:02.029587 kubelet[2656]: I0715 23:18:02.029365 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4qt75" podStartSLOduration=5.606022067 podStartE2EDuration="19.029349021s" podCreationTimestamp="2025-07-15 23:17:43 +0000 UTC" firstStartedPulling="2025-07-15 23:17:43.952644103 +0000 UTC m=+7.165901815" lastFinishedPulling="2025-07-15 23:17:57.375971057 +0000 UTC m=+20.589228769" observedRunningTime="2025-07-15 23:18:02.029286859 +0000 UTC m=+25.242544571" watchObservedRunningTime="2025-07-15 23:18:02.029349021 +0000 UTC m=+25.242606693"
Jul 15 23:18:03.002720 kubelet[2656]: E0715 23:18:03.002684 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:03.348156 systemd-networkd[1464]: cilium_host: Link UP
Jul 15 23:18:03.349389 systemd-networkd[1464]: cilium_net: Link UP
Jul 15 23:18:03.349561 systemd-networkd[1464]: cilium_net: Gained carrier
Jul 15 23:18:03.349684 systemd-networkd[1464]: cilium_host: Gained carrier
Jul 15 23:18:03.435548 systemd-networkd[1464]: cilium_vxlan: Link UP
Jul 15 23:18:03.435555 systemd-networkd[1464]: cilium_vxlan: Gained carrier
Jul 15 23:18:03.842262 kernel: NET: Registered PF_ALG protocol family
Jul 15 23:18:04.012705 kubelet[2656]: E0715 23:18:04.012655 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:04.056344 systemd-networkd[1464]: cilium_host: Gained IPv6LL
Jul 15 23:18:04.120369 systemd-networkd[1464]: cilium_net: Gained IPv6LL
Jul 15 23:18:04.458823 systemd-networkd[1464]: lxc_health: Link UP
Jul 15 23:18:04.461499 systemd-networkd[1464]: lxc_health: Gained carrier
Jul 15 23:18:04.504365 systemd-networkd[1464]: cilium_vxlan: Gained IPv6LL
Jul 15 23:18:04.683977 kernel: eth0: renamed from tmpca0b4
Jul 15 23:18:04.685020 systemd-networkd[1464]: lxc8e7fd0ac7d92: Link UP
Jul 15 23:18:04.687303 systemd-networkd[1464]: lxca9bcdc072ac5: Link UP
Jul 15 23:18:04.693106 systemd-networkd[1464]: lxc8e7fd0ac7d92: Gained carrier
Jul 15 23:18:04.697230 kernel: eth0: renamed from tmpb51fa
Jul 15 23:18:04.697999 systemd-networkd[1464]: lxca9bcdc072ac5: Gained carrier
Jul 15 23:18:05.775106 kubelet[2656]: E0715 23:18:05.775058 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:05.786081 systemd-networkd[1464]: lxc8e7fd0ac7d92: Gained IPv6LL
Jul 15 23:18:05.946979 systemd[1]: Started sshd@7-10.0.0.66:22-10.0.0.1:49780.service - OpenSSH per-connection server daemon (10.0.0.1:49780).
Jul 15 23:18:05.977651 systemd-networkd[1464]: lxc_health: Gained IPv6LL
Jul 15 23:18:06.005589 sshd[3830]: Accepted publickey for core from 10.0.0.1 port 49780 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:06.008324 sshd-session[3830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:06.010098 kubelet[2656]: E0715 23:18:06.009718 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:06.014417 systemd-logind[1510]: New session 8 of user core.
Jul 15 23:18:06.033439 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 23:18:06.179449 sshd[3832]: Connection closed by 10.0.0.1 port 49780
Jul 15 23:18:06.180156 sshd-session[3830]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:06.182903 systemd[1]: sshd@7-10.0.0.66:22-10.0.0.1:49780.service: Deactivated successfully.
Jul 15 23:18:06.184576 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 23:18:06.187199 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit.
Jul 15 23:18:06.188826 systemd-logind[1510]: Removed session 8.
Jul 15 23:18:06.616686 systemd-networkd[1464]: lxca9bcdc072ac5: Gained IPv6LL
Jul 15 23:18:08.324592 containerd[1530]: time="2025-07-15T23:18:08.324355733Z" level=info msg="connecting to shim b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457" address="unix:///run/containerd/s/0ec10f5edac8e5dbef4a1bcd3888b2e72e23cdd73919521d39aef1afe8f7af8c" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:18:08.325994 containerd[1530]: time="2025-07-15T23:18:08.325948892Z" level=info msg="connecting to shim ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8" address="unix:///run/containerd/s/559681318a3ea55ea9e3615ce35f38559af6b701e437c28bf23233c724395f6d" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:18:08.352416 systemd[1]: Started cri-containerd-ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8.scope - libcontainer container ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8.
Jul 15 23:18:08.355475 systemd[1]: Started cri-containerd-b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457.scope - libcontainer container b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457.
Jul 15 23:18:08.365026 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:18:08.372789 systemd-resolved[1383]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:18:08.396727 containerd[1530]: time="2025-07-15T23:18:08.396629252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-82jgp,Uid:35d36c09-4202-4786-a21b-62ae9e1ddcf8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8\""
Jul 15 23:18:08.398096 kubelet[2656]: E0715 23:18:08.397879 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:08.399046 containerd[1530]: time="2025-07-15T23:18:08.398666461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xdg96,Uid:770d6fbd-fcf1-45b8-9ad5-830fbd1ce442,Namespace:kube-system,Attempt:0,} returns sandbox id \"b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457\""
Jul 15 23:18:08.400182 kubelet[2656]: E0715 23:18:08.400154 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:08.404623 containerd[1530]: time="2025-07-15T23:18:08.404548324Z" level=info msg="CreateContainer within sandbox \"ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:18:08.407482 containerd[1530]: time="2025-07-15T23:18:08.407437315Z" level=info msg="CreateContainer within sandbox \"b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:18:08.415814 containerd[1530]: time="2025-07-15T23:18:08.415765037Z" level=info msg="Container 1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:18:08.422631 containerd[1530]: time="2025-07-15T23:18:08.422253675Z" level=info msg="Container 6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:18:08.425968 containerd[1530]: time="2025-07-15T23:18:08.425917444Z" level=info msg="CreateContainer within sandbox \"ca0b435e2d4e04c23627f19b4f0aa214fcf70011b9da89800088e720341822c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8\""
Jul 15 23:18:08.426425 containerd[1530]: time="2025-07-15T23:18:08.426398936Z" level=info msg="StartContainer for \"1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8\""
Jul 15 23:18:08.427281 containerd[1530]: time="2025-07-15T23:18:08.427255237Z" level=info msg="connecting to shim 1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8" address="unix:///run/containerd/s/559681318a3ea55ea9e3615ce35f38559af6b701e437c28bf23233c724395f6d" protocol=ttrpc version=3
Jul 15 23:18:08.431232 containerd[1530]: time="2025-07-15T23:18:08.430985368Z" level=info msg="CreateContainer within sandbox \"b51fa96ac4c61c5e536782d203ff578e8915aea9d9c6d17783b7f35a92cb8457\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8\""
Jul 15 23:18:08.431702 containerd[1530]: time="2025-07-15T23:18:08.431598663Z" level=info msg="StartContainer for \"6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8\""
Jul 15 23:18:08.433267 containerd[1530]: time="2025-07-15T23:18:08.432586167Z" level=info msg="connecting to shim 6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8" address="unix:///run/containerd/s/0ec10f5edac8e5dbef4a1bcd3888b2e72e23cdd73919521d39aef1afe8f7af8c" protocol=ttrpc version=3
Jul 15 23:18:08.451409 systemd[1]: Started cri-containerd-6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8.scope - libcontainer container 6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8.
Jul 15 23:18:08.455743 systemd[1]: Started cri-containerd-1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8.scope - libcontainer container 1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8.
Jul 15 23:18:08.489288 containerd[1530]: time="2025-07-15T23:18:08.489250266Z" level=info msg="StartContainer for \"1eb9d58e6e8172ba54214c23d36fbb6a2b595c92d2809b6273fadf30c17208f8\" returns successfully"
Jul 15 23:18:08.492875 containerd[1530]: time="2025-07-15T23:18:08.492754351Z" level=info msg="StartContainer for \"6b9e46fa9de726f7496e4f9704513e278781c2219c81df995c6544ec4050c0b8\" returns successfully"
Jul 15 23:18:09.015659 kubelet[2656]: E0715 23:18:09.015424 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:09.019366 kubelet[2656]: E0715 23:18:09.019021 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:09.030609 kubelet[2656]: I0715 23:18:09.027865 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xdg96" podStartSLOduration=26.027836952 podStartE2EDuration="26.027836952s" podCreationTimestamp="2025-07-15 23:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:18:09.027408141 +0000 UTC m=+32.240665813" watchObservedRunningTime="2025-07-15 23:18:09.027836952 +0000 UTC m=+32.241094664"
Jul 15 23:18:10.020759 kubelet[2656]: E0715 23:18:10.020732 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:10.020759 kubelet[2656]: E0715 23:18:10.020745 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:11.022788 kubelet[2656]: E0715 23:18:11.022712 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:11.022788 kubelet[2656]: E0715 23:18:11.022751 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:18:11.193754 systemd[1]: Started sshd@8-10.0.0.66:22-10.0.0.1:49782.service - OpenSSH per-connection server daemon (10.0.0.1:49782).
Jul 15 23:18:11.255170 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 49782 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:11.256702 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:11.261640 systemd-logind[1510]: New session 9 of user core.
Jul 15 23:18:11.274423 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:18:11.395105 sshd[4021]: Connection closed by 10.0.0.1 port 49782
Jul 15 23:18:11.395448 sshd-session[4019]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:11.399101 systemd[1]: sshd@8-10.0.0.66:22-10.0.0.1:49782.service: Deactivated successfully.
Jul 15 23:18:11.401135 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:18:11.402167 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:18:11.403768 systemd-logind[1510]: Removed session 9.
Jul 15 23:18:16.411021 systemd[1]: Started sshd@9-10.0.0.66:22-10.0.0.1:48932.service - OpenSSH per-connection server daemon (10.0.0.1:48932).
Jul 15 23:18:16.467418 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 48932 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:16.468720 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:16.474182 systemd-logind[1510]: New session 10 of user core.
Jul 15 23:18:16.489424 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:18:16.600768 sshd[4041]: Connection closed by 10.0.0.1 port 48932
Jul 15 23:18:16.601092 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:16.603832 systemd[1]: sshd@9-10.0.0.66:22-10.0.0.1:48932.service: Deactivated successfully.
Jul 15 23:18:16.605925 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:18:16.607771 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:18:16.609630 systemd-logind[1510]: Removed session 10.
Jul 15 23:18:21.615121 systemd[1]: Started sshd@10-10.0.0.66:22-10.0.0.1:48936.service - OpenSSH per-connection server daemon (10.0.0.1:48936).
Jul 15 23:18:21.676990 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 48936 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:21.678428 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:21.682773 systemd-logind[1510]: New session 11 of user core.
Jul 15 23:18:21.693395 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:18:21.806810 sshd[4059]: Connection closed by 10.0.0.1 port 48936
Jul 15 23:18:21.807153 sshd-session[4057]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:21.825408 systemd[1]: sshd@10-10.0.0.66:22-10.0.0.1:48936.service: Deactivated successfully.
Jul 15 23:18:21.827039 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:18:21.828621 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:18:21.830119 systemd[1]: Started sshd@11-10.0.0.66:22-10.0.0.1:48946.service - OpenSSH per-connection server daemon (10.0.0.1:48946).
Jul 15 23:18:21.831115 systemd-logind[1510]: Removed session 11.
Jul 15 23:18:21.885528 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 48946 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:21.886713 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:21.890586 systemd-logind[1510]: New session 12 of user core.
Jul 15 23:18:21.903367 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:18:22.054615 sshd[4076]: Connection closed by 10.0.0.1 port 48946
Jul 15 23:18:22.055177 sshd-session[4074]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:22.066033 systemd[1]: sshd@11-10.0.0.66:22-10.0.0.1:48946.service: Deactivated successfully.
Jul 15 23:18:22.069736 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:18:22.071750 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:18:22.076092 systemd[1]: Started sshd@12-10.0.0.66:22-10.0.0.1:48958.service - OpenSSH per-connection server daemon (10.0.0.1:48958).
Jul 15 23:18:22.077721 systemd-logind[1510]: Removed session 12.
Jul 15 23:18:22.132220 sshd[4087]: Accepted publickey for core from 10.0.0.1 port 48958 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:22.133478 sshd-session[4087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:22.137395 systemd-logind[1510]: New session 13 of user core.
Jul 15 23:18:22.147409 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:18:22.266810 sshd[4089]: Connection closed by 10.0.0.1 port 48958
Jul 15 23:18:22.267400 sshd-session[4087]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:22.274649 systemd[1]: sshd@12-10.0.0.66:22-10.0.0.1:48958.service: Deactivated successfully.
Jul 15 23:18:22.276334 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:18:22.278076 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:18:22.279238 systemd-logind[1510]: Removed session 13.
Jul 15 23:18:27.285176 systemd[1]: Started sshd@13-10.0.0.66:22-10.0.0.1:45306.service - OpenSSH per-connection server daemon (10.0.0.1:45306).
Jul 15 23:18:27.371541 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 45306 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:27.372916 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:27.386719 systemd-logind[1510]: New session 14 of user core.
Jul 15 23:18:27.403898 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:18:27.531812 sshd[4105]: Connection closed by 10.0.0.1 port 45306
Jul 15 23:18:27.532083 sshd-session[4103]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:27.535912 systemd[1]: sshd@13-10.0.0.66:22-10.0.0.1:45306.service: Deactivated successfully.
Jul 15 23:18:27.537496 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:18:27.539341 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:18:27.541303 systemd-logind[1510]: Removed session 14.
Jul 15 23:18:32.543359 systemd[1]: Started sshd@14-10.0.0.66:22-10.0.0.1:53632.service - OpenSSH per-connection server daemon (10.0.0.1:53632).
Jul 15 23:18:32.603694 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 53632 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:32.604892 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:32.609140 systemd-logind[1510]: New session 15 of user core.
Jul 15 23:18:32.616453 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:18:32.725312 sshd[4121]: Connection closed by 10.0.0.1 port 53632
Jul 15 23:18:32.725958 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:32.738356 systemd[1]: sshd@14-10.0.0.66:22-10.0.0.1:53632.service: Deactivated successfully.
Jul 15 23:18:32.739867 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:18:32.742019 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:18:32.743923 systemd[1]: Started sshd@15-10.0.0.66:22-10.0.0.1:53640.service - OpenSSH per-connection server daemon (10.0.0.1:53640).
Jul 15 23:18:32.744652 systemd-logind[1510]: Removed session 15.
Jul 15 23:18:32.803796 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 53640 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:32.805040 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:32.809200 systemd-logind[1510]: New session 16 of user core.
Jul 15 23:18:32.817371 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:18:33.029251 sshd[4137]: Connection closed by 10.0.0.1 port 53640
Jul 15 23:18:33.030261 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:33.041799 systemd[1]: sshd@15-10.0.0.66:22-10.0.0.1:53640.service: Deactivated successfully.
Jul 15 23:18:33.045107 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:18:33.047076 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:18:33.049319 systemd[1]: Started sshd@16-10.0.0.66:22-10.0.0.1:53652.service - OpenSSH per-connection server daemon (10.0.0.1:53652).
Jul 15 23:18:33.051670 systemd-logind[1510]: Removed session 16.
Jul 15 23:18:33.112858 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 53652 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:33.114186 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:33.118058 systemd-logind[1510]: New session 17 of user core.
Jul 15 23:18:33.130368 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:18:33.690290 sshd[4150]: Connection closed by 10.0.0.1 port 53652
Jul 15 23:18:33.690633 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:33.703147 systemd[1]: sshd@16-10.0.0.66:22-10.0.0.1:53652.service: Deactivated successfully.
Jul 15 23:18:33.709160 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:18:33.711395 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:18:33.717569 systemd[1]: Started sshd@17-10.0.0.66:22-10.0.0.1:53658.service - OpenSSH per-connection server daemon (10.0.0.1:53658).
Jul 15 23:18:33.720737 systemd-logind[1510]: Removed session 17.
Jul 15 23:18:33.777266 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 53658 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:33.778267 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:33.782513 systemd-logind[1510]: New session 18 of user core.
Jul 15 23:18:33.789392 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:18:34.033409 sshd[4171]: Connection closed by 10.0.0.1 port 53658
Jul 15 23:18:34.033508 sshd-session[4169]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:34.048079 systemd[1]: sshd@17-10.0.0.66:22-10.0.0.1:53658.service: Deactivated successfully.
Jul 15 23:18:34.050315 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:18:34.052441 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:18:34.054139 systemd[1]: Started sshd@18-10.0.0.66:22-10.0.0.1:53668.service - OpenSSH per-connection server daemon (10.0.0.1:53668).
Jul 15 23:18:34.055191 systemd-logind[1510]: Removed session 18.
Jul 15 23:18:34.105747 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 53668 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:34.107458 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:34.112331 systemd-logind[1510]: New session 19 of user core.
Jul 15 23:18:34.121390 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:18:34.230234 sshd[4185]: Connection closed by 10.0.0.1 port 53668
Jul 15 23:18:34.230488 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:34.234579 systemd[1]: sshd@18-10.0.0.66:22-10.0.0.1:53668.service: Deactivated successfully.
Jul 15 23:18:34.236374 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:18:34.237112 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:18:34.238621 systemd-logind[1510]: Removed session 19.
Jul 15 23:18:39.248528 systemd[1]: Started sshd@19-10.0.0.66:22-10.0.0.1:53676.service - OpenSSH per-connection server daemon (10.0.0.1:53676).
Jul 15 23:18:39.308987 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 53676 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:39.313817 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:39.318931 systemd-logind[1510]: New session 20 of user core.
Jul 15 23:18:39.332371 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:18:39.460987 sshd[4205]: Connection closed by 10.0.0.1 port 53676
Jul 15 23:18:39.461447 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:39.465552 systemd[1]: sshd@19-10.0.0.66:22-10.0.0.1:53676.service: Deactivated successfully.
Jul 15 23:18:39.468676 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:18:39.475551 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:18:39.477156 systemd-logind[1510]: Removed session 20.
Jul 15 23:18:44.475803 systemd[1]: Started sshd@20-10.0.0.66:22-10.0.0.1:42354.service - OpenSSH per-connection server daemon (10.0.0.1:42354).
Jul 15 23:18:44.527849 sshd[4220]: Accepted publickey for core from 10.0.0.1 port 42354 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:44.529089 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:44.533273 systemd-logind[1510]: New session 21 of user core.
Jul 15 23:18:44.547372 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:18:44.663240 sshd[4222]: Connection closed by 10.0.0.1 port 42354
Jul 15 23:18:44.663604 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:44.683441 systemd[1]: sshd@20-10.0.0.66:22-10.0.0.1:42354.service: Deactivated successfully.
Jul 15 23:18:44.686237 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:18:44.689451 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:18:44.691490 systemd[1]: Started sshd@21-10.0.0.66:22-10.0.0.1:42366.service - OpenSSH per-connection server daemon (10.0.0.1:42366).
Jul 15 23:18:44.692738 systemd-logind[1510]: Removed session 21.
Jul 15 23:18:44.746610 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 42366 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:44.747872 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:44.752843 systemd-logind[1510]: New session 22 of user core.
Jul 15 23:18:44.763416 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:18:46.263944 kubelet[2656]: I0715 23:18:46.263025 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-82jgp" podStartSLOduration=63.263010504 podStartE2EDuration="1m3.263010504s" podCreationTimestamp="2025-07-15 23:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:18:09.060888731 +0000 UTC m=+32.274146443" watchObservedRunningTime="2025-07-15 23:18:46.263010504 +0000 UTC m=+69.476268216"
Jul 15 23:18:46.272697 containerd[1530]: time="2025-07-15T23:18:46.272555533Z" level=info msg="StopContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" with timeout 30 (s)"
Jul 15 23:18:46.274250 containerd[1530]: time="2025-07-15T23:18:46.274218265Z" level=info msg="Stop container \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" with signal terminated"
Jul 15 23:18:46.290912 systemd[1]: cri-containerd-938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104.scope: Deactivated successfully.
Jul 15 23:18:46.293412 containerd[1530]: time="2025-07-15T23:18:46.293372925Z" level=info msg="received exit event container_id:\"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" id:\"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" pid:3214 exited_at:{seconds:1752621526 nanos:293096003}"
Jul 15 23:18:46.294224 containerd[1530]: time="2025-07-15T23:18:46.293904049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" id:\"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" pid:3214 exited_at:{seconds:1752621526 nanos:293096003}"
Jul 15 23:18:46.303893 containerd[1530]: time="2025-07-15T23:18:46.303851601Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:18:46.311771 containerd[1530]: time="2025-07-15T23:18:46.311731658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" id:\"18347a5e712dc859482ee5ed2bb9095eab70b28fba57fa22a258102948e3b73e\" pid:4264 exited_at:{seconds:1752621526 nanos:311289175}"
Jul 15 23:18:46.314083 containerd[1530]: time="2025-07-15T23:18:46.314051155Z" level=info msg="StopContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" with timeout 2 (s)"
Jul 15 23:18:46.314473 containerd[1530]: time="2025-07-15T23:18:46.314451558Z" level=info msg="Stop container \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" with signal terminated"
Jul 15 23:18:46.321622 systemd-networkd[1464]: lxc_health: Link DOWN
Jul 15 23:18:46.321628 systemd-networkd[1464]: lxc_health: Lost carrier
Jul 15 23:18:46.336057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104-rootfs.mount: Deactivated successfully.
Jul 15 23:18:46.338118 systemd[1]: cri-containerd-13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648.scope: Deactivated successfully.
Jul 15 23:18:46.338356 containerd[1530]: time="2025-07-15T23:18:46.338328252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" pid:3323 exited_at:{seconds:1752621526 nanos:337999850}"
Jul 15 23:18:46.338419 containerd[1530]: time="2025-07-15T23:18:46.338403093Z" level=info msg="received exit event container_id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" id:\"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" pid:3323 exited_at:{seconds:1752621526 nanos:337999850}"
Jul 15 23:18:46.339497 systemd[1]: cri-containerd-13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648.scope: Consumed 6.685s CPU time, 122.3M memory peak, 144K read from disk, 12.9M written to disk.
Jul 15 23:18:46.351196 containerd[1530]: time="2025-07-15T23:18:46.351155866Z" level=info msg="StopContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" returns successfully"
Jul 15 23:18:46.354072 containerd[1530]: time="2025-07-15T23:18:46.354003846Z" level=info msg="StopPodSandbox for \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\""
Jul 15 23:18:46.359505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648-rootfs.mount: Deactivated successfully.
Jul 15 23:18:46.363470 containerd[1530]: time="2025-07-15T23:18:46.363347674Z" level=info msg="Container to stop \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.368454 containerd[1530]: time="2025-07-15T23:18:46.368416191Z" level=info msg="StopContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" returns successfully"
Jul 15 23:18:46.368975 containerd[1530]: time="2025-07-15T23:18:46.368922755Z" level=info msg="StopPodSandbox for \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\""
Jul 15 23:18:46.369047 containerd[1530]: time="2025-07-15T23:18:46.368988675Z" level=info msg="Container to stop \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.369047 containerd[1530]: time="2025-07-15T23:18:46.369002036Z" level=info msg="Container to stop \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.369047 containerd[1530]: time="2025-07-15T23:18:46.369010556Z" level=info msg="Container to stop \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.375668 containerd[1530]: time="2025-07-15T23:18:46.369018796Z" level=info msg="Container to stop \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.375668 containerd[1530]: time="2025-07-15T23:18:46.375661964Z" level=info msg="Container to stop \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:18:46.379040 systemd[1]: cri-containerd-15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3.scope: Deactivated successfully.
Jul 15 23:18:46.379925 containerd[1530]: time="2025-07-15T23:18:46.379887835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" id:\"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" pid:2850 exit_status:137 exited_at:{seconds:1752621526 nanos:379488752}"
Jul 15 23:18:46.383116 systemd[1]: cri-containerd-2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8.scope: Deactivated successfully.
Jul 15 23:18:46.401559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8-rootfs.mount: Deactivated successfully.
Jul 15 23:18:46.408352 containerd[1530]: time="2025-07-15T23:18:46.408352842Z" level=info msg="shim disconnected" id=2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8 namespace=k8s.io
Jul 15 23:18:46.408712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3-rootfs.mount: Deactivated successfully.
Jul 15 23:18:46.417679 containerd[1530]: time="2025-07-15T23:18:46.408395722Z" level=warning msg="cleaning up after shim disconnected" id=2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8 namespace=k8s.io Jul 15 23:18:46.418003 containerd[1530]: time="2025-07-15T23:18:46.417822191Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:18:46.418003 containerd[1530]: time="2025-07-15T23:18:46.413314878Z" level=info msg="shim disconnected" id=15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3 namespace=k8s.io Jul 15 23:18:46.418080 containerd[1530]: time="2025-07-15T23:18:46.417972672Z" level=warning msg="cleaning up after shim disconnected" id=15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3 namespace=k8s.io Jul 15 23:18:46.418080 containerd[1530]: time="2025-07-15T23:18:46.418021032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 23:18:46.435162 containerd[1530]: time="2025-07-15T23:18:46.435090157Z" level=info msg="received exit event sandbox_id:\"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" exit_status:137 exited_at:{seconds:1752621526 nanos:379488752}" Jul 15 23:18:46.436667 containerd[1530]: time="2025-07-15T23:18:46.436630688Z" level=info msg="TearDown network for sandbox \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" successfully" Jul 15 23:18:46.436747 containerd[1530]: time="2025-07-15T23:18:46.436675048Z" level=info msg="StopPodSandbox for \"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" returns successfully" Jul 15 23:18:46.436969 containerd[1530]: time="2025-07-15T23:18:46.435120877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" id:\"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" pid:2841 exit_status:137 exited_at:{seconds:1752621526 nanos:384446748}" Jul 15 23:18:46.437087 containerd[1530]: 
time="2025-07-15T23:18:46.437054011Z" level=info msg="received exit event sandbox_id:\"2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8\" exit_status:137 exited_at:{seconds:1752621526 nanos:384446748}" Jul 15 23:18:46.437410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3-shm.mount: Deactivated successfully. Jul 15 23:18:46.437517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2250b5a864a3a452bab90871ff267e28daba9390898a2c3cbb8a457d7f4824d8-shm.mount: Deactivated successfully. Jul 15 23:18:46.439187 containerd[1530]: time="2025-07-15T23:18:46.439048546Z" level=info msg="TearDown network for sandbox \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" successfully" Jul 15 23:18:46.439187 containerd[1530]: time="2025-07-15T23:18:46.439182187Z" level=info msg="StopPodSandbox for \"15a3986017e3c1b4d786fbc074afb364805577d02de3b6f4debc4d363db0edd3\" returns successfully" Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522155 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-net\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522224 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cni-path\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522249 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsbfx\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-kube-api-access-nsbfx\") pod 
\"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522271 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-hostproc\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522290 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a9a293-68c2-4917-a27a-6efcaa873138-cilium-config-path\") pod \"58a9a293-68c2-4917-a27a-6efcaa873138\" (UID: \"58a9a293-68c2-4917-a27a-6efcaa873138\") " Jul 15 23:18:46.523093 kubelet[2656]: I0715 23:18:46.522304 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-xtables-lock\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522317 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-cgroup\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522330 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-etc-cni-netd\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522344 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-bpf-maps\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522358 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-lib-modules\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522373 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-hubble-tls\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523347 kubelet[2656]: I0715 23:18:46.522388 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/620f76f2-06e8-402a-821a-100b435ef955-clustermesh-secrets\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523644 kubelet[2656]: I0715 23:18:46.522407 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-kernel\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523644 kubelet[2656]: I0715 23:18:46.522423 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-run\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.523644 kubelet[2656]: I0715 23:18:46.522441 2656 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbvhg\" (UniqueName: \"kubernetes.io/projected/58a9a293-68c2-4917-a27a-6efcaa873138-kube-api-access-tbvhg\") pod \"58a9a293-68c2-4917-a27a-6efcaa873138\" (UID: \"58a9a293-68c2-4917-a27a-6efcaa873138\") " Jul 15 23:18:46.523644 kubelet[2656]: I0715 23:18:46.522461 2656 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/620f76f2-06e8-402a-821a-100b435ef955-cilium-config-path\") pod \"620f76f2-06e8-402a-821a-100b435ef955\" (UID: \"620f76f2-06e8-402a-821a-100b435ef955\") " Jul 15 23:18:46.526771 kubelet[2656]: I0715 23:18:46.526434 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.526771 kubelet[2656]: I0715 23:18:46.526454 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-hostproc" (OuterVolumeSpecName: "hostproc") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.526771 kubelet[2656]: I0715 23:18:46.526527 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cni-path" (OuterVolumeSpecName: "cni-path") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.526771 kubelet[2656]: I0715 23:18:46.526550 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.528571 kubelet[2656]: I0715 23:18:46.528527 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.528705 kubelet[2656]: I0715 23:18:46.528690 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.535616 kubelet[2656]: I0715 23:18:46.531029 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a9a293-68c2-4917-a27a-6efcaa873138-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "58a9a293-68c2-4917-a27a-6efcaa873138" (UID: "58a9a293-68c2-4917-a27a-6efcaa873138"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:18:46.535616 kubelet[2656]: I0715 23:18:46.531100 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.535616 kubelet[2656]: I0715 23:18:46.531118 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.535616 kubelet[2656]: I0715 23:18:46.531135 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.535616 kubelet[2656]: I0715 23:18:46.531148 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 15 23:18:46.538240 kubelet[2656]: I0715 23:18:46.531876 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/620f76f2-06e8-402a-821a-100b435ef955-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 15 23:18:46.538928 kubelet[2656]: I0715 23:18:46.538888 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a9a293-68c2-4917-a27a-6efcaa873138-kube-api-access-tbvhg" (OuterVolumeSpecName: "kube-api-access-tbvhg") pod "58a9a293-68c2-4917-a27a-6efcaa873138" (UID: "58a9a293-68c2-4917-a27a-6efcaa873138"). InnerVolumeSpecName "kube-api-access-tbvhg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:18:46.539010 kubelet[2656]: I0715 23:18:46.538929 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:18:46.539010 kubelet[2656]: I0715 23:18:46.538944 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-kube-api-access-nsbfx" (OuterVolumeSpecName: "kube-api-access-nsbfx") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "kube-api-access-nsbfx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 15 23:18:46.539133 kubelet[2656]: I0715 23:18:46.539109 2656 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/620f76f2-06e8-402a-821a-100b435ef955-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "620f76f2-06e8-402a-821a-100b435ef955" (UID: "620f76f2-06e8-402a-821a-100b435ef955"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 15 23:18:46.623463 kubelet[2656]: I0715 23:18:46.623427 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/620f76f2-06e8-402a-821a-100b435ef955-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623608 2656 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623623 2656 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623632 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nsbfx\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-kube-api-access-nsbfx\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623642 2656 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623651 2656 reconciler_common.go:299] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58a9a293-68c2-4917-a27a-6efcaa873138-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623659 2656 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623666 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623745 kubelet[2656]: I0715 23:18:46.623673 2656 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623682 2656 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623689 2656 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623696 2656 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/620f76f2-06e8-402a-821a-100b435ef955-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623703 2656 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/620f76f2-06e8-402a-821a-100b435ef955-clustermesh-secrets\") on node 
\"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623711 2656 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623720 2656 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/620f76f2-06e8-402a-821a-100b435ef955-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.623916 kubelet[2656]: I0715 23:18:46.623727 2656 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tbvhg\" (UniqueName: \"kubernetes.io/projected/58a9a293-68c2-4917-a27a-6efcaa873138-kube-api-access-tbvhg\") on node \"localhost\" DevicePath \"\"" Jul 15 23:18:46.909917 systemd[1]: Removed slice kubepods-burstable-pod620f76f2_06e8_402a_821a_100b435ef955.slice - libcontainer container kubepods-burstable-pod620f76f2_06e8_402a_821a_100b435ef955.slice. Jul 15 23:18:46.910019 systemd[1]: kubepods-burstable-pod620f76f2_06e8_402a_821a_100b435ef955.slice: Consumed 6.906s CPU time, 122.6M memory peak, 148K read from disk, 12.9M written to disk. Jul 15 23:18:46.910915 systemd[1]: Removed slice kubepods-besteffort-pod58a9a293_68c2_4917_a27a_6efcaa873138.slice - libcontainer container kubepods-besteffort-pod58a9a293_68c2_4917_a27a_6efcaa873138.slice. 
Jul 15 23:18:46.957742 kubelet[2656]: E0715 23:18:46.957665 2656 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 23:18:47.097826 kubelet[2656]: I0715 23:18:47.097460 2656 scope.go:117] "RemoveContainer" containerID="938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104" Jul 15 23:18:47.106138 containerd[1530]: time="2025-07-15T23:18:47.106057539Z" level=info msg="RemoveContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\"" Jul 15 23:18:47.134538 containerd[1530]: time="2025-07-15T23:18:47.134461059Z" level=info msg="RemoveContainer for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" returns successfully" Jul 15 23:18:47.137277 kubelet[2656]: I0715 23:18:47.137235 2656 scope.go:117] "RemoveContainer" containerID="938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104" Jul 15 23:18:47.137555 containerd[1530]: time="2025-07-15T23:18:47.137506881Z" level=error msg="ContainerStatus for \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\": not found" Jul 15 23:18:47.143960 kubelet[2656]: E0715 23:18:47.143759 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\": not found" containerID="938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104" Jul 15 23:18:47.143960 kubelet[2656]: I0715 23:18:47.143813 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104"} err="failed to get container status 
\"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\": rpc error: code = NotFound desc = an error occurred when try to find container \"938f786f7929ca601c412e52d4347cf330f188c8cd8be446900e385f49990104\": not found" Jul 15 23:18:47.143960 kubelet[2656]: I0715 23:18:47.143852 2656 scope.go:117] "RemoveContainer" containerID="13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648" Jul 15 23:18:47.145614 containerd[1530]: time="2025-07-15T23:18:47.145578297Z" level=info msg="RemoveContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\"" Jul 15 23:18:47.149951 containerd[1530]: time="2025-07-15T23:18:47.149895208Z" level=info msg="RemoveContainer for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" returns successfully" Jul 15 23:18:47.150244 kubelet[2656]: I0715 23:18:47.150125 2656 scope.go:117] "RemoveContainer" containerID="dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317" Jul 15 23:18:47.151619 containerd[1530]: time="2025-07-15T23:18:47.151577580Z" level=info msg="RemoveContainer for \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\"" Jul 15 23:18:47.155066 containerd[1530]: time="2025-07-15T23:18:47.155032244Z" level=info msg="RemoveContainer for \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" returns successfully" Jul 15 23:18:47.155318 kubelet[2656]: I0715 23:18:47.155299 2656 scope.go:117] "RemoveContainer" containerID="33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb" Jul 15 23:18:47.157562 containerd[1530]: time="2025-07-15T23:18:47.157531382Z" level=info msg="RemoveContainer for \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\"" Jul 15 23:18:47.160742 containerd[1530]: time="2025-07-15T23:18:47.160705364Z" level=info msg="RemoveContainer for \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" returns successfully" Jul 15 23:18:47.160961 kubelet[2656]: I0715 23:18:47.160941 2656 
scope.go:117] "RemoveContainer" containerID="5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a" Jul 15 23:18:47.162511 containerd[1530]: time="2025-07-15T23:18:47.162431016Z" level=info msg="RemoveContainer for \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\"" Jul 15 23:18:47.165170 containerd[1530]: time="2025-07-15T23:18:47.165116075Z" level=info msg="RemoveContainer for \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" returns successfully" Jul 15 23:18:47.165429 kubelet[2656]: I0715 23:18:47.165320 2656 scope.go:117] "RemoveContainer" containerID="879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16" Jul 15 23:18:47.166991 containerd[1530]: time="2025-07-15T23:18:47.166948168Z" level=info msg="RemoveContainer for \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\"" Jul 15 23:18:47.169344 containerd[1530]: time="2025-07-15T23:18:47.169316025Z" level=info msg="RemoveContainer for \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" returns successfully" Jul 15 23:18:47.169574 kubelet[2656]: I0715 23:18:47.169464 2656 scope.go:117] "RemoveContainer" containerID="13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648" Jul 15 23:18:47.169781 containerd[1530]: time="2025-07-15T23:18:47.169736588Z" level=error msg="ContainerStatus for \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\": not found" Jul 15 23:18:47.170039 kubelet[2656]: E0715 23:18:47.169883 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\": not found" containerID="13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648" Jul 15 23:18:47.170039 
kubelet[2656]: I0715 23:18:47.169912 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648"} err="failed to get container status \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\": rpc error: code = NotFound desc = an error occurred when try to find container \"13007320f7400f897b0d4d87f0a7cf96326ea833af9acf04a981825aa9a10648\": not found" Jul 15 23:18:47.170039 kubelet[2656]: I0715 23:18:47.169933 2656 scope.go:117] "RemoveContainer" containerID="dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317" Jul 15 23:18:47.170128 containerd[1530]: time="2025-07-15T23:18:47.170081270Z" level=error msg="ContainerStatus for \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\": not found" Jul 15 23:18:47.170250 kubelet[2656]: E0715 23:18:47.170191 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\": not found" containerID="dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317" Jul 15 23:18:47.170289 kubelet[2656]: I0715 23:18:47.170274 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317"} err="failed to get container status \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc861d00cf13b55bb788e0decefc0f120ff65d29c3bad0b722437cce18cbb317\": not found" Jul 15 23:18:47.170317 kubelet[2656]: I0715 23:18:47.170293 2656 scope.go:117] "RemoveContainer" 
containerID="33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb" Jul 15 23:18:47.170483 containerd[1530]: time="2025-07-15T23:18:47.170451673Z" level=error msg="ContainerStatus for \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\": not found" Jul 15 23:18:47.170699 kubelet[2656]: E0715 23:18:47.170559 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\": not found" containerID="33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb" Jul 15 23:18:47.170699 kubelet[2656]: I0715 23:18:47.170580 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb"} err="failed to get container status \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"33aa14f11be4f5e520320073cdd67dcb7333eefcebbee935901d220bbabd42eb\": not found" Jul 15 23:18:47.170699 kubelet[2656]: I0715 23:18:47.170600 2656 scope.go:117] "RemoveContainer" containerID="5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a" Jul 15 23:18:47.170779 containerd[1530]: time="2025-07-15T23:18:47.170723715Z" level=error msg="ContainerStatus for \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\": not found" Jul 15 23:18:47.170848 kubelet[2656]: E0715 23:18:47.170826 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\": not found" containerID="5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a" Jul 15 23:18:47.170879 kubelet[2656]: I0715 23:18:47.170848 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a"} err="failed to get container status \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f5e49cdab786c53848c70bea1adf451b3006ce45423abd6ee64a32f2ed53c8a\": not found" Jul 15 23:18:47.170879 kubelet[2656]: I0715 23:18:47.170863 2656 scope.go:117] "RemoveContainer" containerID="879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16" Jul 15 23:18:47.171037 containerd[1530]: time="2025-07-15T23:18:47.171012197Z" level=error msg="ContainerStatus for \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\": not found" Jul 15 23:18:47.171146 kubelet[2656]: E0715 23:18:47.171129 2656 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\": not found" containerID="879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16" Jul 15 23:18:47.171179 kubelet[2656]: I0715 23:18:47.171152 2656 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16"} err="failed to get container status \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"879dc6b8e05f9754a98e84cad1cce3cde31af9650f58c2e248e5cb08692e6d16\": not found" Jul 15 23:18:47.334676 systemd[1]: var-lib-kubelet-pods-620f76f2\x2d06e8\x2d402a\x2d821a\x2d100b435ef955-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnsbfx.mount: Deactivated successfully. Jul 15 23:18:47.334770 systemd[1]: var-lib-kubelet-pods-58a9a293\x2d68c2\x2d4917\x2da27a\x2d6efcaa873138-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtbvhg.mount: Deactivated successfully. Jul 15 23:18:47.334819 systemd[1]: var-lib-kubelet-pods-620f76f2\x2d06e8\x2d402a\x2d821a\x2d100b435ef955-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 23:18:47.334864 systemd[1]: var-lib-kubelet-pods-620f76f2\x2d06e8\x2d402a\x2d821a\x2d100b435ef955-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:18:48.237904 sshd[4238]: Connection closed by 10.0.0.1 port 42366 Jul 15 23:18:48.238380 sshd-session[4236]: pam_unix(sshd:session): session closed for user core Jul 15 23:18:48.248937 systemd[1]: sshd@21-10.0.0.66:22-10.0.0.1:42366.service: Deactivated successfully. Jul 15 23:18:48.250836 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 23:18:48.251713 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. Jul 15 23:18:48.255161 systemd[1]: Started sshd@22-10.0.0.66:22-10.0.0.1:42378.service - OpenSSH per-connection server daemon (10.0.0.1:42378). Jul 15 23:18:48.256290 systemd-logind[1510]: Removed session 22. Jul 15 23:18:48.313945 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:48.315406 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:48.319456 systemd-logind[1510]: New session 23 of user core. Jul 15 23:18:48.330377 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 15 23:18:48.613036 kubelet[2656]: I0715 23:18:48.612889 2656 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T23:18:48Z","lastTransitionTime":"2025-07-15T23:18:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 23:18:48.906532 kubelet[2656]: I0715 23:18:48.906376 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a9a293-68c2-4917-a27a-6efcaa873138" path="/var/lib/kubelet/pods/58a9a293-68c2-4917-a27a-6efcaa873138/volumes" Jul 15 23:18:48.906807 kubelet[2656]: I0715 23:18:48.906775 2656 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="620f76f2-06e8-402a-821a-100b435ef955" path="/var/lib/kubelet/pods/620f76f2-06e8-402a-821a-100b435ef955/volumes" Jul 15 23:18:49.225261 sshd[4392]: Connection closed by 10.0.0.1 port 42378 Jul 15 23:18:49.225783 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jul 15 23:18:49.234764 systemd[1]: sshd@22-10.0.0.66:22-10.0.0.1:42378.service: Deactivated successfully. Jul 15 23:18:49.237093 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:18:49.238150 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:18:49.242727 systemd[1]: Started sshd@23-10.0.0.66:22-10.0.0.1:42392.service - OpenSSH per-connection server daemon (10.0.0.1:42392). Jul 15 23:18:49.243849 systemd-logind[1510]: Removed session 23. Jul 15 23:18:49.264689 systemd[1]: Created slice kubepods-burstable-pod939f5f54_f880_4f09_99b9_fd7cfb5618ff.slice - libcontainer container kubepods-burstable-pod939f5f54_f880_4f09_99b9_fd7cfb5618ff.slice. 
Jul 15 23:18:49.321056 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 42392 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:49.322423 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:49.326083 systemd-logind[1510]: New session 24 of user core. Jul 15 23:18:49.332368 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340360 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-cni-path\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340401 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-host-proc-sys-kernel\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340418 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/939f5f54-f880-4f09-99b9-fd7cfb5618ff-hubble-tls\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340433 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-cilium-run\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340449 2656 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/939f5f54-f880-4f09-99b9-fd7cfb5618ff-cilium-config-path\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340515 kubelet[2656]: I0715 23:18:49.340464 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-hostproc\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340724 kubelet[2656]: I0715 23:18:49.340480 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/939f5f54-f880-4f09-99b9-fd7cfb5618ff-clustermesh-secrets\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340724 kubelet[2656]: I0715 23:18:49.340499 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-bpf-maps\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340934 kubelet[2656]: I0715 23:18:49.340805 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-etc-cni-netd\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340934 kubelet[2656]: I0715 23:18:49.340861 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/939f5f54-f880-4f09-99b9-fd7cfb5618ff-cilium-ipsec-secrets\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340934 kubelet[2656]: I0715 23:18:49.340895 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-cilium-cgroup\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.340934 kubelet[2656]: I0715 23:18:49.340914 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-xtables-lock\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.341101 kubelet[2656]: I0715 23:18:49.341075 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-host-proc-sys-net\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.341237 kubelet[2656]: I0715 23:18:49.341159 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gcfl\" (UniqueName: \"kubernetes.io/projected/939f5f54-f880-4f09-99b9-fd7cfb5618ff-kube-api-access-9gcfl\") pod \"cilium-25ndq\" (UID: \"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.341237 kubelet[2656]: I0715 23:18:49.341180 2656 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/939f5f54-f880-4f09-99b9-fd7cfb5618ff-lib-modules\") pod \"cilium-25ndq\" (UID: 
\"939f5f54-f880-4f09-99b9-fd7cfb5618ff\") " pod="kube-system/cilium-25ndq" Jul 15 23:18:49.380161 sshd[4407]: Connection closed by 10.0.0.1 port 42392 Jul 15 23:18:49.380620 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Jul 15 23:18:49.391366 systemd[1]: sshd@23-10.0.0.66:22-10.0.0.1:42392.service: Deactivated successfully. Jul 15 23:18:49.393742 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 23:18:49.394448 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit. Jul 15 23:18:49.397009 systemd[1]: Started sshd@24-10.0.0.66:22-10.0.0.1:42404.service - OpenSSH per-connection server daemon (10.0.0.1:42404). Jul 15 23:18:49.398072 systemd-logind[1510]: Removed session 24. Jul 15 23:18:49.448063 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 42404 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:49.449915 sshd-session[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:49.462647 systemd-logind[1510]: New session 25 of user core. Jul 15 23:18:49.474386 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 15 23:18:49.571579 kubelet[2656]: E0715 23:18:49.571475 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:49.572472 containerd[1530]: time="2025-07-15T23:18:49.571964306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25ndq,Uid:939f5f54-f880-4f09-99b9-fd7cfb5618ff,Namespace:kube-system,Attempt:0,}" Jul 15 23:18:49.600084 containerd[1530]: time="2025-07-15T23:18:49.599803530Z" level=info msg="connecting to shim 26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60" address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:18:49.628401 systemd[1]: Started cri-containerd-26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60.scope - libcontainer container 26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60. 
Jul 15 23:18:49.654985 containerd[1530]: time="2025-07-15T23:18:49.654846654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25ndq,Uid:939f5f54-f880-4f09-99b9-fd7cfb5618ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\"" Jul 15 23:18:49.655624 kubelet[2656]: E0715 23:18:49.655599 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:49.662911 containerd[1530]: time="2025-07-15T23:18:49.662863347Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:18:49.669739 containerd[1530]: time="2025-07-15T23:18:49.669701953Z" level=info msg="Container 6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:49.679029 containerd[1530]: time="2025-07-15T23:18:49.678948094Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\"" Jul 15 23:18:49.679556 containerd[1530]: time="2025-07-15T23:18:49.679523978Z" level=info msg="StartContainer for \"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\"" Jul 15 23:18:49.680745 containerd[1530]: time="2025-07-15T23:18:49.680575345Z" level=info msg="connecting to shim 6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95" address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" protocol=ttrpc version=3 Jul 15 23:18:49.719433 systemd[1]: Started cri-containerd-6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95.scope - libcontainer 
container 6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95. Jul 15 23:18:49.747915 containerd[1530]: time="2025-07-15T23:18:49.747874390Z" level=info msg="StartContainer for \"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\" returns successfully" Jul 15 23:18:49.767896 systemd[1]: cri-containerd-6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95.scope: Deactivated successfully. Jul 15 23:18:49.770624 containerd[1530]: time="2025-07-15T23:18:49.770586181Z" level=info msg="received exit event container_id:\"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\" id:\"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\" pid:4484 exited_at:{seconds:1752621529 nanos:770324339}" Jul 15 23:18:49.770840 containerd[1530]: time="2025-07-15T23:18:49.770723702Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\" id:\"6cb930689291693ee68cc9c4f413f0ed336fa2b1c61180a14574496f3da5ab95\" pid:4484 exited_at:{seconds:1752621529 nanos:770324339}" Jul 15 23:18:50.115964 kubelet[2656]: E0715 23:18:50.115817 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:50.199223 containerd[1530]: time="2025-07-15T23:18:50.199165697Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:18:50.206258 containerd[1530]: time="2025-07-15T23:18:50.206176742Z" level=info msg="Container b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:50.210830 containerd[1530]: time="2025-07-15T23:18:50.210786771Z" level=info msg="CreateContainer within sandbox 
\"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\"" Jul 15 23:18:50.211503 containerd[1530]: time="2025-07-15T23:18:50.211379935Z" level=info msg="StartContainer for \"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\"" Jul 15 23:18:50.213725 containerd[1530]: time="2025-07-15T23:18:50.213681510Z" level=info msg="connecting to shim b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1" address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" protocol=ttrpc version=3 Jul 15 23:18:50.241426 systemd[1]: Started cri-containerd-b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1.scope - libcontainer container b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1. Jul 15 23:18:50.271237 containerd[1530]: time="2025-07-15T23:18:50.270512914Z" level=info msg="StartContainer for \"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\" returns successfully" Jul 15 23:18:50.281241 systemd[1]: cri-containerd-b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1.scope: Deactivated successfully. 
Jul 15 23:18:50.281837 containerd[1530]: time="2025-07-15T23:18:50.281718866Z" level=info msg="received exit event container_id:\"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\" id:\"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\" pid:4531 exited_at:{seconds:1752621530 nanos:281419824}" Jul 15 23:18:50.281837 containerd[1530]: time="2025-07-15T23:18:50.281812667Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\" id:\"b872e279c5899c42609c80f5ee1b35b12ed273915dc9ba2b5fb481a8b5b6c0a1\" pid:4531 exited_at:{seconds:1752621530 nanos:281419824}" Jul 15 23:18:51.121428 kubelet[2656]: E0715 23:18:51.121296 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:51.153661 containerd[1530]: time="2025-07-15T23:18:51.153368386Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:18:51.179504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807603627.mount: Deactivated successfully. 
Jul 15 23:18:51.180308 containerd[1530]: time="2025-07-15T23:18:51.179727270Z" level=info msg="Container 467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:51.190533 containerd[1530]: time="2025-07-15T23:18:51.190483976Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\"" Jul 15 23:18:51.192245 containerd[1530]: time="2025-07-15T23:18:51.191376782Z" level=info msg="StartContainer for \"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\"" Jul 15 23:18:51.192926 containerd[1530]: time="2025-07-15T23:18:51.192876951Z" level=info msg="connecting to shim 467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72" address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" protocol=ttrpc version=3 Jul 15 23:18:51.221404 systemd[1]: Started cri-containerd-467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72.scope - libcontainer container 467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72. Jul 15 23:18:51.254588 systemd[1]: cri-containerd-467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72.scope: Deactivated successfully. 
Jul 15 23:18:51.256697 containerd[1530]: time="2025-07-15T23:18:51.256646908Z" level=info msg="received exit event container_id:\"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\" id:\"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\" pid:4575 exited_at:{seconds:1752621531 nanos:255155738}" Jul 15 23:18:51.256800 containerd[1530]: time="2025-07-15T23:18:51.256732348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\" id:\"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\" pid:4575 exited_at:{seconds:1752621531 nanos:255155738}" Jul 15 23:18:51.256800 containerd[1530]: time="2025-07-15T23:18:51.256673268Z" level=info msg="StartContainer for \"467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72\" returns successfully" Jul 15 23:18:51.447820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-467ed39516a82f38975aa902fdb64508b9fc5ea2f0647c6d08f8416d66808e72-rootfs.mount: Deactivated successfully. 
Jul 15 23:18:51.903671 kubelet[2656]: E0715 23:18:51.903585 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:51.958766 kubelet[2656]: E0715 23:18:51.958721 2656 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 23:18:52.126425 kubelet[2656]: E0715 23:18:52.126369 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:52.131192 containerd[1530]: time="2025-07-15T23:18:52.131155116Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:18:52.138193 containerd[1530]: time="2025-07-15T23:18:52.138151598Z" level=info msg="Container 3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:52.148647 containerd[1530]: time="2025-07-15T23:18:52.148594541Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\"" Jul 15 23:18:52.149183 containerd[1530]: time="2025-07-15T23:18:52.149155064Z" level=info msg="StartContainer for \"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\"" Jul 15 23:18:52.150852 containerd[1530]: time="2025-07-15T23:18:52.150811114Z" level=info msg="connecting to shim 3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe" 
address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" protocol=ttrpc version=3 Jul 15 23:18:52.175513 systemd[1]: Started cri-containerd-3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe.scope - libcontainer container 3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe. Jul 15 23:18:52.198253 systemd[1]: cri-containerd-3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe.scope: Deactivated successfully. Jul 15 23:18:52.199751 containerd[1530]: time="2025-07-15T23:18:52.199720808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\" id:\"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\" pid:4613 exited_at:{seconds:1752621532 nanos:198959604}" Jul 15 23:18:52.201478 containerd[1530]: time="2025-07-15T23:18:52.201366658Z" level=info msg="received exit event container_id:\"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\" id:\"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\" pid:4613 exited_at:{seconds:1752621532 nanos:198959604}" Jul 15 23:18:52.202122 containerd[1530]: time="2025-07-15T23:18:52.202096463Z" level=info msg="StartContainer for \"3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe\" returns successfully" Jul 15 23:18:52.219323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3676a2b6568460e2a9df11472ac082c435fa8a1bdcfdd06a3a98b812931cafbe-rootfs.mount: Deactivated successfully. 
Jul 15 23:18:53.130754 kubelet[2656]: E0715 23:18:53.130686 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:53.136627 containerd[1530]: time="2025-07-15T23:18:53.136555181Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:18:53.144939 containerd[1530]: time="2025-07-15T23:18:53.144758989Z" level=info msg="Container 04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:18:53.148827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1288032674.mount: Deactivated successfully. Jul 15 23:18:53.155519 containerd[1530]: time="2025-07-15T23:18:53.155431451Z" level=info msg="CreateContainer within sandbox \"26a32a51ea2a0fa299b540b135e99b4be5aa5cc03b03825a3533d7f954e33b60\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\"" Jul 15 23:18:53.155889 containerd[1530]: time="2025-07-15T23:18:53.155865534Z" level=info msg="StartContainer for \"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\"" Jul 15 23:18:53.156767 containerd[1530]: time="2025-07-15T23:18:53.156741019Z" level=info msg="connecting to shim 04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5" address="unix:///run/containerd/s/e0187599ed652e10a0a08b4f56e6babcfe2123e76c0fde904d6b5fad7221f880" protocol=ttrpc version=3 Jul 15 23:18:53.183434 systemd[1]: Started cri-containerd-04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5.scope - libcontainer container 04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5. 
Jul 15 23:18:53.210910 containerd[1530]: time="2025-07-15T23:18:53.210874615Z" level=info msg="StartContainer for \"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" returns successfully" Jul 15 23:18:53.264349 containerd[1530]: time="2025-07-15T23:18:53.264299646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" id:\"f20d0267252949aaa2b64101b48be0676dee563d12441c6aa130d4d581527a7a\" pid:4681 exited_at:{seconds:1752621533 nanos:264039725}" Jul 15 23:18:53.472653 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 15 23:18:54.136537 kubelet[2656]: E0715 23:18:54.136498 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:55.573477 kubelet[2656]: E0715 23:18:55.573438 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:55.831685 containerd[1530]: time="2025-07-15T23:18:55.831558575Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" id:\"ccaa7a7ea039e21fc8903c429cd24a5d9b9120e2ef55fe9b50a2f2bd6f088b0b\" pid:5067 exit_status:1 exited_at:{seconds:1752621535 nanos:830809531}" Jul 15 23:18:55.903453 kubelet[2656]: E0715 23:18:55.903418 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:56.404419 systemd-networkd[1464]: lxc_health: Link UP Jul 15 23:18:56.406091 systemd-networkd[1464]: lxc_health: Gained carrier Jul 15 23:18:57.577005 kubelet[2656]: E0715 23:18:57.576582 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:57.600296 kubelet[2656]: I0715 23:18:57.600229 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-25ndq" podStartSLOduration=8.6001982 podStartE2EDuration="8.6001982s" podCreationTimestamp="2025-07-15 23:18:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:18:54.150645787 +0000 UTC m=+77.363903579" watchObservedRunningTime="2025-07-15 23:18:57.6001982 +0000 UTC m=+80.813455912" Jul 15 23:18:57.948421 containerd[1530]: time="2025-07-15T23:18:57.948067707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" id:\"243a6d20673fd27a2442816b9f4c2f4ae33dfa01357da8e1fea1fc99fd700f72\" pid:5221 exited_at:{seconds:1752621537 nanos:947441064}" Jul 15 23:18:58.136416 systemd-networkd[1464]: lxc_health: Gained IPv6LL Jul 15 23:18:58.144598 kubelet[2656]: E0715 23:18:58.144574 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:18:59.903977 kubelet[2656]: E0715 23:18:59.903925 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:00.092637 containerd[1530]: time="2025-07-15T23:19:00.092572400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" id:\"2fc705cea6f1e4df30eeddbb6b4054b64525fce513671a793e05b1aa9191e9b0\" pid:5254 exited_at:{seconds:1752621540 nanos:92214599}" Jul 15 23:19:02.203309 containerd[1530]: time="2025-07-15T23:19:02.203259011Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"04e6bc0d21e98b0fa36d96f811becffa077f2b89094556ba5938a53f1acc87a5\" id:\"d4f736c0d56712a0ebdd4666d91433b5b2a2febe2e7766f8930e5444991eed7c\" pid:5280 exited_at:{seconds:1752621542 nanos:202502527}" Jul 15 23:19:02.223720 sshd[4420]: Connection closed by 10.0.0.1 port 42404 Jul 15 23:19:02.221747 sshd-session[4414]: pam_unix(sshd:session): session closed for user core Jul 15 23:19:02.226199 systemd[1]: sshd@24-10.0.0.66:22-10.0.0.1:42404.service: Deactivated successfully. Jul 15 23:19:02.228110 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 23:19:02.229594 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit. Jul 15 23:19:02.230886 systemd-logind[1510]: Removed session 25.