Jul 15 23:18:30.826608 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 15 23:18:30.826629 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 22:00:45 -00 2025
Jul 15 23:18:30.826639 kernel: KASLR enabled
Jul 15 23:18:30.826644 kernel: efi: EFI v2.7 by EDK II
Jul 15 23:18:30.826650 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 15 23:18:30.826655 kernel: random: crng init done
Jul 15 23:18:30.826662 kernel: secureboot: Secure boot disabled
Jul 15 23:18:30.826667 kernel: ACPI: Early table checksum verification disabled
Jul 15 23:18:30.826673 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 15 23:18:30.826680 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 15 23:18:30.826686 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826692 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826697 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826703 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826710 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826717 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826723 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826729 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826735 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 15 23:18:30.826741 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 15 23:18:30.826747 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 15 23:18:30.826753 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:18:30.826759 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 15 23:18:30.826765 kernel: Zone ranges:
Jul 15 23:18:30.826771 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:18:30.826778 kernel: DMA32 empty
Jul 15 23:18:30.826784 kernel: Normal empty
Jul 15 23:18:30.826790 kernel: Device empty
Jul 15 23:18:30.826796 kernel: Movable zone start for each node
Jul 15 23:18:30.826802 kernel: Early memory node ranges
Jul 15 23:18:30.826808 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 15 23:18:30.826814 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 15 23:18:30.826820 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 15 23:18:30.826826 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 15 23:18:30.826854 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 15 23:18:30.826873 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 15 23:18:30.826879 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 15 23:18:30.826887 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 15 23:18:30.826893 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 15 23:18:30.826899 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 15 23:18:30.826907 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 15 23:18:30.826914 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 15 23:18:30.826921 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 15 23:18:30.826928 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 15 23:18:30.826935 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 15 23:18:30.826941 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 15 23:18:30.826948 kernel: psci: probing for conduit method from ACPI.
Jul 15 23:18:30.826954 kernel: psci: PSCIv1.1 detected in firmware.
Jul 15 23:18:30.826960 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 15 23:18:30.826967 kernel: psci: Trusted OS migration not required
Jul 15 23:18:30.826973 kernel: psci: SMC Calling Convention v1.1
Jul 15 23:18:30.826980 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 15 23:18:30.826986 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 15 23:18:30.826994 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 15 23:18:30.827001 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 15 23:18:30.827007 kernel: Detected PIPT I-cache on CPU0
Jul 15 23:18:30.827013 kernel: CPU features: detected: GIC system register CPU interface
Jul 15 23:18:30.827020 kernel: CPU features: detected: Spectre-v4
Jul 15 23:18:30.827026 kernel: CPU features: detected: Spectre-BHB
Jul 15 23:18:30.827032 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 15 23:18:30.827039 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 15 23:18:30.827045 kernel: CPU features: detected: ARM erratum 1418040
Jul 15 23:18:30.827052 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 15 23:18:30.827058 kernel: alternatives: applying boot alternatives
Jul 15 23:18:30.827066 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:18:30.827075 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 15 23:18:30.827081 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 15 23:18:30.827088 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 15 23:18:30.827095 kernel: Fallback order for Node 0: 0
Jul 15 23:18:30.827101 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 15 23:18:30.827108 kernel: Policy zone: DMA
Jul 15 23:18:30.827114 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 15 23:18:30.827121 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 15 23:18:30.827127 kernel: software IO TLB: area num 4.
Jul 15 23:18:30.827134 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 15 23:18:30.827140 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 15 23:18:30.827149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 15 23:18:30.827156 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 15 23:18:30.827163 kernel: rcu: RCU event tracing is enabled.
Jul 15 23:18:30.827169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 15 23:18:30.827176 kernel: Trampoline variant of Tasks RCU enabled.
Jul 15 23:18:30.827182 kernel: Tracing variant of Tasks RCU enabled.
Jul 15 23:18:30.827189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 15 23:18:30.827195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 15 23:18:30.827202 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:18:30.827208 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 15 23:18:30.827215 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 15 23:18:30.827223 kernel: GICv3: 256 SPIs implemented
Jul 15 23:18:30.827230 kernel: GICv3: 0 Extended SPIs implemented
Jul 15 23:18:30.827236 kernel: Root IRQ handler: gic_handle_irq
Jul 15 23:18:30.827248 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 15 23:18:30.827254 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 15 23:18:30.827261 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 15 23:18:30.827267 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 15 23:18:30.827274 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 15 23:18:30.827280 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 15 23:18:30.827287 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 15 23:18:30.827293 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 15 23:18:30.827300 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 15 23:18:30.827308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:18:30.827314 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 15 23:18:30.827321 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 15 23:18:30.827327 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 15 23:18:30.827334 kernel: arm-pv: using stolen time PV
Jul 15 23:18:30.827340 kernel: Console: colour dummy device 80x25
Jul 15 23:18:30.827347 kernel: ACPI: Core revision 20240827
Jul 15 23:18:30.827354 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 15 23:18:30.827360 kernel: pid_max: default: 32768 minimum: 301
Jul 15 23:18:30.827367 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 15 23:18:30.827375 kernel: landlock: Up and running.
Jul 15 23:18:30.827381 kernel: SELinux: Initializing.
Jul 15 23:18:30.827388 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:18:30.827395 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 15 23:18:30.827401 kernel: rcu: Hierarchical SRCU implementation.
Jul 15 23:18:30.827408 kernel: rcu: Max phase no-delay instances is 400.
Jul 15 23:18:30.827415 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 15 23:18:30.827421 kernel: Remapping and enabling EFI services.
Jul 15 23:18:30.827428 kernel: smp: Bringing up secondary CPUs ...
Jul 15 23:18:30.827440 kernel: Detected PIPT I-cache on CPU1
Jul 15 23:18:30.827447 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 15 23:18:30.827454 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 15 23:18:30.827462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:18:30.827469 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 15 23:18:30.827476 kernel: Detected PIPT I-cache on CPU2
Jul 15 23:18:30.827483 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 15 23:18:30.827490 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 15 23:18:30.827498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:18:30.827505 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 15 23:18:30.827512 kernel: Detected PIPT I-cache on CPU3
Jul 15 23:18:30.827519 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 15 23:18:30.827526 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 15 23:18:30.827533 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 15 23:18:30.827540 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 15 23:18:30.827547 kernel: smp: Brought up 1 node, 4 CPUs
Jul 15 23:18:30.827553 kernel: SMP: Total of 4 processors activated.
Jul 15 23:18:30.827562 kernel: CPU: All CPU(s) started at EL1
Jul 15 23:18:30.827568 kernel: CPU features: detected: 32-bit EL0 Support
Jul 15 23:18:30.827575 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 15 23:18:30.827582 kernel: CPU features: detected: Common not Private translations
Jul 15 23:18:30.827589 kernel: CPU features: detected: CRC32 instructions
Jul 15 23:18:30.827596 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 15 23:18:30.827603 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 15 23:18:30.827610 kernel: CPU features: detected: LSE atomic instructions
Jul 15 23:18:30.827617 kernel: CPU features: detected: Privileged Access Never
Jul 15 23:18:30.827625 kernel: CPU features: detected: RAS Extension Support
Jul 15 23:18:30.827632 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 15 23:18:30.827639 kernel: alternatives: applying system-wide alternatives
Jul 15 23:18:30.827645 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 15 23:18:30.827653 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Jul 15 23:18:30.827660 kernel: devtmpfs: initialized
Jul 15 23:18:30.827667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 15 23:18:30.827674 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 15 23:18:30.827681 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 15 23:18:30.827689 kernel: 0 pages in range for non-PLT usage
Jul 15 23:18:30.827696 kernel: 508432 pages in range for PLT usage
Jul 15 23:18:30.827703 kernel: pinctrl core: initialized pinctrl subsystem
Jul 15 23:18:30.827709 kernel: SMBIOS 3.0.0 present.
Jul 15 23:18:30.827716 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 15 23:18:30.827723 kernel: DMI: Memory slots populated: 1/1
Jul 15 23:18:30.827730 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 15 23:18:30.827737 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 15 23:18:30.827744 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 15 23:18:30.827753 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 15 23:18:30.827760 kernel: audit: initializing netlink subsys (disabled)
Jul 15 23:18:30.827766 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 15 23:18:30.827773 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 15 23:18:30.827780 kernel: cpuidle: using governor menu
Jul 15 23:18:30.827787 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 15 23:18:30.827794 kernel: ASID allocator initialised with 32768 entries
Jul 15 23:18:30.827801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 15 23:18:30.827808 kernel: Serial: AMBA PL011 UART driver
Jul 15 23:18:30.827816 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 15 23:18:30.827823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 15 23:18:30.827841 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 15 23:18:30.827850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 15 23:18:30.827858 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 15 23:18:30.827865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 15 23:18:30.827872 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 15 23:18:30.827879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 15 23:18:30.827886 kernel: ACPI: Added _OSI(Module Device)
Jul 15 23:18:30.827895 kernel: ACPI: Added _OSI(Processor Device)
Jul 15 23:18:30.827902 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 15 23:18:30.827909 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 15 23:18:30.827916 kernel: ACPI: Interpreter enabled
Jul 15 23:18:30.827923 kernel: ACPI: Using GIC for interrupt routing
Jul 15 23:18:30.827930 kernel: ACPI: MCFG table detected, 1 entries
Jul 15 23:18:30.827936 kernel: ACPI: CPU0 has been hot-added
Jul 15 23:18:30.827943 kernel: ACPI: CPU1 has been hot-added
Jul 15 23:18:30.827950 kernel: ACPI: CPU2 has been hot-added
Jul 15 23:18:30.827957 kernel: ACPI: CPU3 has been hot-added
Jul 15 23:18:30.827965 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 15 23:18:30.827972 kernel: printk: legacy console [ttyAMA0] enabled
Jul 15 23:18:30.827979 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 15 23:18:30.828103 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 15 23:18:30.828167 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 15 23:18:30.828225 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 15 23:18:30.828293 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 15 23:18:30.828355 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 15 23:18:30.828364 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 15 23:18:30.828371 kernel: PCI host bridge to bus 0000:00
Jul 15 23:18:30.828437 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 15 23:18:30.828491 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 15 23:18:30.828542 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 15 23:18:30.828593 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 15 23:18:30.828669 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 15 23:18:30.828738 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 15 23:18:30.828800 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 15 23:18:30.828873 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 15 23:18:30.828935 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 15 23:18:30.828994 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 15 23:18:30.829053 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 15 23:18:30.829114 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 15 23:18:30.829166 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 15 23:18:30.829218 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 15 23:18:30.829277 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 15 23:18:30.829287 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 15 23:18:30.829294 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 15 23:18:30.829301 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 15 23:18:30.829310 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 15 23:18:30.829316 kernel: iommu: Default domain type: Translated
Jul 15 23:18:30.829324 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 15 23:18:30.829330 kernel: efivars: Registered efivars operations
Jul 15 23:18:30.829337 kernel: vgaarb: loaded
Jul 15 23:18:30.829344 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 15 23:18:30.829351 kernel: VFS: Disk quotas dquot_6.6.0
Jul 15 23:18:30.829358 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 15 23:18:30.829365 kernel: pnp: PnP ACPI init
Jul 15 23:18:30.829434 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 15 23:18:30.829443 kernel: pnp: PnP ACPI: found 1 devices
Jul 15 23:18:30.829450 kernel: NET: Registered PF_INET protocol family
Jul 15 23:18:30.829457 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 15 23:18:30.829464 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 15 23:18:30.829471 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 15 23:18:30.829478 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 15 23:18:30.829486 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 15 23:18:30.829494 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 15 23:18:30.829501 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:18:30.829508 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 15 23:18:30.829515 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 15 23:18:30.829522 kernel: PCI: CLS 0 bytes, default 64
Jul 15 23:18:30.829529 kernel: kvm [1]: HYP mode not available
Jul 15 23:18:30.829536 kernel: Initialise system trusted keyrings
Jul 15 23:18:30.829543 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 15 23:18:30.829550 kernel: Key type asymmetric registered
Jul 15 23:18:30.829558 kernel: Asymmetric key parser 'x509' registered
Jul 15 23:18:30.829565 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 15 23:18:30.829572 kernel: io scheduler mq-deadline registered
Jul 15 23:18:30.829579 kernel: io scheduler kyber registered
Jul 15 23:18:30.829586 kernel: io scheduler bfq registered
Jul 15 23:18:30.829593 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 15 23:18:30.829601 kernel: ACPI: button: Power Button [PWRB]
Jul 15 23:18:30.829608 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 15 23:18:30.829666 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 15 23:18:30.829676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 15 23:18:30.829683 kernel: thunder_xcv, ver 1.0
Jul 15 23:18:30.829690 kernel: thunder_bgx, ver 1.0
Jul 15 23:18:30.829697 kernel: nicpf, ver 1.0
Jul 15 23:18:30.829704 kernel: nicvf, ver 1.0
Jul 15 23:18:30.829773 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 15 23:18:30.829837 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T23:18:30 UTC (1752621510)
Jul 15 23:18:30.829847 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 15 23:18:30.829856 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 15 23:18:30.829863 kernel: watchdog: NMI not fully supported
Jul 15 23:18:30.829870 kernel: watchdog: Hard watchdog permanently disabled
Jul 15 23:18:30.829877 kernel: NET: Registered PF_INET6 protocol family
Jul 15 23:18:30.829884 kernel: Segment Routing with IPv6
Jul 15 23:18:30.829891 kernel: In-situ OAM (IOAM) with IPv6
Jul 15 23:18:30.829898 kernel: NET: Registered PF_PACKET protocol family
Jul 15 23:18:30.829905 kernel: Key type dns_resolver registered
Jul 15 23:18:30.829912 kernel: registered taskstats version 1
Jul 15 23:18:30.829918 kernel: Loading compiled-in X.509 certificates
Jul 15 23:18:30.829927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 2e049b1166d7080a2074348abe7e86e115624bdd'
Jul 15 23:18:30.829934 kernel: Demotion targets for Node 0: null
Jul 15 23:18:30.829941 kernel: Key type .fscrypt registered
Jul 15 23:18:30.829947 kernel: Key type fscrypt-provisioning registered
Jul 15 23:18:30.829954 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 15 23:18:30.829961 kernel: ima: Allocated hash algorithm: sha1
Jul 15 23:18:30.829968 kernel: ima: No architecture policies found
Jul 15 23:18:30.829975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 15 23:18:30.829983 kernel: clk: Disabling unused clocks
Jul 15 23:18:30.829990 kernel: PM: genpd: Disabling unused power domains
Jul 15 23:18:30.829997 kernel: Warning: unable to open an initial console.
Jul 15 23:18:30.830005 kernel: Freeing unused kernel memory: 39488K
Jul 15 23:18:30.830011 kernel: Run /init as init process
Jul 15 23:18:30.830018 kernel: with arguments:
Jul 15 23:18:30.830025 kernel: /init
Jul 15 23:18:30.830032 kernel: with environment:
Jul 15 23:18:30.830039 kernel: HOME=/
Jul 15 23:18:30.830047 kernel: TERM=linux
Jul 15 23:18:30.830054 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 15 23:18:30.830061 systemd[1]: Successfully made /usr/ read-only.
Jul 15 23:18:30.830072 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:18:30.830080 systemd[1]: Detected virtualization kvm.
Jul 15 23:18:30.830087 systemd[1]: Detected architecture arm64.
Jul 15 23:18:30.830094 systemd[1]: Running in initrd.
Jul 15 23:18:30.830101 systemd[1]: No hostname configured, using default hostname.
Jul 15 23:18:30.830111 systemd[1]: Hostname set to .
Jul 15 23:18:30.830118 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:18:30.830126 systemd[1]: Queued start job for default target initrd.target.
Jul 15 23:18:30.830133 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:18:30.830141 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:18:30.830149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 15 23:18:30.830157 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:18:30.830164 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 15 23:18:30.830174 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 15 23:18:30.830182 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 15 23:18:30.830190 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 15 23:18:30.830197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:18:30.830205 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:18:30.830213 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:18:30.830220 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:18:30.830229 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:18:30.830236 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:18:30.830250 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:18:30.830258 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:18:30.830266 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 15 23:18:30.830273 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 15 23:18:30.830281 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:18:30.830289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:18:30.830298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:18:30.830306 systemd[1]: Reached target sockets.target - Socket Units.
Jul 15 23:18:30.830313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 15 23:18:30.830321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:18:30.830328 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 15 23:18:30.830336 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 15 23:18:30.830344 systemd[1]: Starting systemd-fsck-usr.service...
Jul 15 23:18:30.830351 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:18:30.830359 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:18:30.830368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:18:30.830375 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 15 23:18:30.830384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:18:30.830391 systemd[1]: Finished systemd-fsck-usr.service.
Jul 15 23:18:30.830400 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 15 23:18:30.830424 systemd-journald[244]: Collecting audit messages is disabled.
Jul 15 23:18:30.830442 systemd-journald[244]: Journal started
Jul 15 23:18:30.830461 systemd-journald[244]: Runtime Journal (/run/log/journal/42e960440a504ab194ce22278d0d937d) is 6M, max 48.5M, 42.4M free.
Jul 15 23:18:30.834145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:18:30.823232 systemd-modules-load[245]: Inserted module 'overlay'
Jul 15 23:18:30.838854 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:18:30.838881 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 15 23:18:30.840982 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 15 23:18:30.843916 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 15 23:18:30.844880 kernel: Bridge firewalling registered
Jul 15 23:18:30.845156 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:18:30.848937 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 15 23:18:30.850574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:18:30.852558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:18:30.862963 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:18:30.868579 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:18:30.869877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:18:30.873905 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 15 23:18:30.876740 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:18:30.884946 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:18:30.897950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:18:30.901005 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 15 23:18:30.925472 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6efbcbd16e8e41b645be9f8e34b328753e37d282675200dab08e504f8e58a578
Jul 15 23:18:30.926090 systemd-resolved[283]: Positive Trust Anchors:
Jul 15 23:18:30.926099 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:18:30.926137 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:18:30.931011 systemd-resolved[283]: Defaulting to hostname 'linux'.
Jul 15 23:18:30.932012 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:18:30.933626 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:18:30.998868 kernel: SCSI subsystem initialized
Jul 15 23:18:31.003847 kernel: Loading iSCSI transport class v2.0-870.
Jul 15 23:18:31.010864 kernel: iscsi: registered transport (tcp)
Jul 15 23:18:31.025216 kernel: iscsi: registered transport (qla4xxx)
Jul 15 23:18:31.025249 kernel: QLogic iSCSI HBA Driver
Jul 15 23:18:31.041028 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:18:31.056981 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:18:31.059077 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:18:31.104872 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:18:31.107289 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 15 23:18:31.176894 kernel: raid6: neonx8 gen() 15789 MB/s
Jul 15 23:18:31.193854 kernel: raid6: neonx4 gen() 15818 MB/s
Jul 15 23:18:31.210862 kernel: raid6: neonx2 gen() 13139 MB/s
Jul 15 23:18:31.227858 kernel: raid6: neonx1 gen() 10434 MB/s
Jul 15 23:18:31.244869 kernel: raid6: int64x8 gen() 6890 MB/s
Jul 15 23:18:31.261853 kernel: raid6: int64x4 gen() 7353 MB/s
Jul 15 23:18:31.278851 kernel: raid6: int64x2 gen() 6093 MB/s
Jul 15 23:18:31.296016 kernel: raid6: int64x1 gen() 5055 MB/s
Jul 15 23:18:31.296029 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s
Jul 15 23:18:31.314011 kernel: raid6: .... xor() 12416 MB/s, rmw enabled
Jul 15 23:18:31.314036 kernel: raid6: using neon recovery algorithm
Jul 15 23:18:31.321151 kernel: xor: measuring software checksum speed
Jul 15 23:18:31.321179 kernel: 8regs : 21636 MB/sec
Jul 15 23:18:31.321849 kernel: 32regs : 20770 MB/sec
Jul 15 23:18:31.323123 kernel: arm64_neon : 24513 MB/sec
Jul 15 23:18:31.323136 kernel: xor: using function: arm64_neon (24513 MB/sec)
Jul 15 23:18:31.380882 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 15 23:18:31.386778 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:18:31.389468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:18:31.414694 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Jul 15 23:18:31.418808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:18:31.420982 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 15 23:18:31.445034 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Jul 15 23:18:31.466380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:18:31.468923 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:18:31.529228 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:18:31.532363 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 15 23:18:31.579803 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 15 23:18:31.579978 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 15 23:18:31.594350 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 15 23:18:31.594393 kernel: GPT:9289727 != 19775487
Jul 15 23:18:31.594403 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 15 23:18:31.594412 kernel: GPT:9289727 != 19775487
Jul 15 23:18:31.595333 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 15 23:18:31.595413 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:18:31.596929 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:18:31.595489 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:18:31.599741 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:18:31.602591 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 15 23:18:31.627347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 15 23:18:31.629101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:18:31.637470 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:18:31.646391 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 15 23:18:31.653375 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 15 23:18:31.654581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 15 23:18:31.669185 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:18:31.670397 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:18:31.672433 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:18:31.674512 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:18:31.677178 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 15 23:18:31.678982 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 15 23:18:31.699568 disk-uuid[593]: Primary Header is updated.
Jul 15 23:18:31.699568 disk-uuid[593]: Secondary Entries is updated.
Jul 15 23:18:31.699568 disk-uuid[593]: Secondary Header is updated.
Jul 15 23:18:31.701639 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:18:31.708854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:18:31.711850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:18:32.721859 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 15 23:18:32.722328 disk-uuid[599]: The operation has completed successfully.
Jul 15 23:18:32.749740 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 15 23:18:32.749861 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 15 23:18:32.778630 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 15 23:18:32.801670 sh[614]: Success
Jul 15 23:18:32.815231 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 15 23:18:32.815287 kernel: device-mapper: uevent: version 1.0.3
Jul 15 23:18:32.816370 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 15 23:18:32.823853 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 15 23:18:32.854638 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 15 23:18:32.857247 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 15 23:18:32.870731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 15 23:18:32.875851 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 15 23:18:32.875879 kernel: BTRFS: device fsid e70e9257-c19d-4e0a-b2ee-631da7d0eb2b devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (626)
Jul 15 23:18:32.878658 kernel: BTRFS info (device dm-0): first mount of filesystem e70e9257-c19d-4e0a-b2ee-631da7d0eb2b
Jul 15 23:18:32.878686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:18:32.880269 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 15 23:18:32.884947 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 15 23:18:32.886216 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:18:32.887644 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 15 23:18:32.888401 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 15 23:18:32.889993 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 15 23:18:32.918843 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (656)
Jul 15 23:18:32.918887 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:18:32.918898 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:18:32.919808 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:18:32.926849 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:18:32.927509 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 15 23:18:32.930001 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 15 23:18:32.990873 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:18:32.994098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:18:33.039408 systemd-networkd[799]: lo: Link UP
Jul 15 23:18:33.039418 systemd-networkd[799]: lo: Gained carrier
Jul 15 23:18:33.040186 systemd-networkd[799]: Enumeration completed
Jul 15 23:18:33.040315 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 15 23:18:33.040699 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:18:33.040702 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:18:33.041597 systemd-networkd[799]: eth0: Link UP
Jul 15 23:18:33.041600 systemd-networkd[799]: eth0: Gained carrier
Jul 15 23:18:33.041608 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:18:33.041786 systemd[1]: Reached target network.target - Network.
Jul 15 23:18:33.073977 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 15 23:18:33.075935 ignition[706]: Ignition 2.21.0
Jul 15 23:18:33.075941 ignition[706]: Stage: fetch-offline
Jul 15 23:18:33.075968 ignition[706]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:33.075975 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:33.076207 ignition[706]: parsed url from cmdline: ""
Jul 15 23:18:33.076210 ignition[706]: no config URL provided
Jul 15 23:18:33.076214 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
Jul 15 23:18:33.076221 ignition[706]: no config at "/usr/lib/ignition/user.ign"
Jul 15 23:18:33.076252 ignition[706]: op(1): [started] loading QEMU firmware config module
Jul 15 23:18:33.076259 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 15 23:18:33.082975 ignition[706]: op(1): [finished] loading QEMU firmware config module
Jul 15 23:18:33.119539 ignition[706]: parsing config with SHA512: af34af50c86dcd846b655636dda664130a786b280111db1db363c28d6e839d6148322f18391f1211bfdbdc5da67698f18434d6036ad4f0a2c80cb245d1bba3f7
Jul 15 23:18:33.125800 unknown[706]: fetched base config from "system"
Jul 15 23:18:33.125819 unknown[706]: fetched user config from "qemu"
Jul 15 23:18:33.126248 ignition[706]: fetch-offline: fetch-offline passed
Jul 15 23:18:33.126317 ignition[706]: Ignition finished successfully
Jul 15 23:18:33.129444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:18:33.130810 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 15 23:18:33.131584 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 15 23:18:33.159007 ignition[812]: Ignition 2.21.0
Jul 15 23:18:33.159022 ignition[812]: Stage: kargs
Jul 15 23:18:33.159169 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:33.159178 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:33.160396 ignition[812]: kargs: kargs passed
Jul 15 23:18:33.160472 ignition[812]: Ignition finished successfully
Jul 15 23:18:33.164249 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 15 23:18:33.166616 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 15 23:18:33.191425 ignition[820]: Ignition 2.21.0
Jul 15 23:18:33.191440 ignition[820]: Stage: disks
Jul 15 23:18:33.191568 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:33.191577 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:33.192619 ignition[820]: disks: disks passed
Jul 15 23:18:33.194230 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 15 23:18:33.192684 ignition[820]: Ignition finished successfully
Jul 15 23:18:33.195823 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 15 23:18:33.197210 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 15 23:18:33.199108 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:18:33.200633 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:18:33.202479 systemd[1]: Reached target basic.target - Basic System.
Jul 15 23:18:33.205163 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 15 23:18:33.236261 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 15 23:18:33.240245 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 15 23:18:33.242567 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 15 23:18:33.317852 kernel: EXT4-fs (vda9): mounted filesystem db08fdf6-07fd-45a1-bb3b-a7d0399d70fd r/w with ordered data mode. Quota mode: none.
Jul 15 23:18:33.318469 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 15 23:18:33.319730 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:18:33.323845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:18:33.334394 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 15 23:18:33.335454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 15 23:18:33.335498 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 15 23:18:33.335529 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:18:33.347176 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (838)
Jul 15 23:18:33.347200 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:18:33.347211 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:18:33.347220 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:18:33.342503 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 15 23:18:33.344860 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 15 23:18:33.353150 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:18:33.389379 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
Jul 15 23:18:33.393940 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
Jul 15 23:18:33.398355 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
Jul 15 23:18:33.401957 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 15 23:18:33.485508 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 15 23:18:33.487532 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 15 23:18:33.489067 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 15 23:18:33.514873 kernel: BTRFS info (device vda6): last unmount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:18:33.527463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 15 23:18:33.533349 ignition[953]: INFO : Ignition 2.21.0
Jul 15 23:18:33.533349 ignition[953]: INFO : Stage: mount
Jul 15 23:18:33.534993 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:33.534993 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:33.534993 ignition[953]: INFO : mount: mount passed
Jul 15 23:18:33.539858 ignition[953]: INFO : Ignition finished successfully
Jul 15 23:18:33.537486 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 15 23:18:33.540130 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 15 23:18:33.875982 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 15 23:18:33.877527 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 15 23:18:33.904843 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966)
Jul 15 23:18:33.906883 kernel: BTRFS info (device vda6): first mount of filesystem b155db48-94d7-40af-bc6d-97d496102c15
Jul 15 23:18:33.906902 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 15 23:18:33.906911 kernel: BTRFS info (device vda6): using free-space-tree
Jul 15 23:18:33.910057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 15 23:18:33.940272 ignition[983]: INFO : Ignition 2.21.0
Jul 15 23:18:33.940272 ignition[983]: INFO : Stage: files
Jul 15 23:18:33.942102 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:33.942102 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:33.944423 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Jul 15 23:18:33.944423 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 15 23:18:33.944423 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 15 23:18:33.948639 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 15 23:18:33.948639 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 15 23:18:33.948639 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 15 23:18:33.948639 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:18:33.948639 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 15 23:18:33.945489 unknown[983]: wrote ssh authorized keys file for user: core
Jul 15 23:18:34.031427 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 15 23:18:34.520953 systemd-networkd[799]: eth0: Gained IPv6LL
Jul 15 23:18:34.610773 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 15 23:18:34.610773 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 23:18:34.610773 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 15 23:18:34.803844 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 15 23:18:34.925214 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:18:34.927251 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 15 23:18:34.944355 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:18:34.946788 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 15 23:18:34.946788 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:18:34.951669 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:18:34.951669 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:18:34.951669 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 15 23:18:35.452284 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 15 23:18:36.038732 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 15 23:18:36.038732 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 15 23:18:36.046063 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 15 23:18:36.049098 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:18:36.069929 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:18:36.073391 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 15 23:18:36.076400 ignition[983]: INFO : files: files passed
Jul 15 23:18:36.076400 ignition[983]: INFO : Ignition finished successfully
Jul 15 23:18:36.079162 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 15 23:18:36.082961 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 15 23:18:36.096000 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 15 23:18:36.098708 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 15 23:18:36.098790 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 15 23:18:36.102624 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 15 23:18:36.104915 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:18:36.106644 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:18:36.108183 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 15 23:18:36.107520 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:18:36.109587 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 15 23:18:36.112963 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 15 23:18:36.156681 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 15 23:18:36.156787 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 15 23:18:36.160166 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 15 23:18:36.163796 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 15 23:18:36.165701 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 15 23:18:36.166506 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 15 23:18:36.192293 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:18:36.194797 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 15 23:18:36.218185 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:18:36.219514 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:18:36.221574 systemd[1]: Stopped target timers.target - Timer Units.
Jul 15 23:18:36.223386 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 15 23:18:36.223521 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 15 23:18:36.226009 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 15 23:18:36.228035 systemd[1]: Stopped target basic.target - Basic System.
Jul 15 23:18:36.229674 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 15 23:18:36.231536 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 15 23:18:36.234557 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 15 23:18:36.243480 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 15 23:18:36.245898 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 15 23:18:36.248455 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 15 23:18:36.250480 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 15 23:18:36.252485 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 15 23:18:36.254307 systemd[1]: Stopped target swap.target - Swaps.
Jul 15 23:18:36.256425 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 15 23:18:36.256564 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 15 23:18:36.258933 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:18:36.260876 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:18:36.262957 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 15 23:18:36.263920 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:18:36.265194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 15 23:18:36.265327 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 15 23:18:36.268007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 15 23:18:36.268129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 15 23:18:36.270086 systemd[1]: Stopped target paths.target - Path Units.
Jul 15 23:18:36.271634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 15 23:18:36.275881 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:18:36.277142 systemd[1]: Stopped target slices.target - Slice Units.
Jul 15 23:18:36.279008 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 15 23:18:36.280451 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 15 23:18:36.280536 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 15 23:18:36.281947 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 15 23:18:36.282026 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 15 23:18:36.283471 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 15 23:18:36.283587 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 15 23:18:36.285332 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 15 23:18:36.285436 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 15 23:18:36.287671 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 15 23:18:36.289276 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 15 23:18:36.289402 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:18:36.302400 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 15 23:18:36.303251 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 15 23:18:36.303372 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:18:36.305089 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 15 23:18:36.305198 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 15 23:18:36.311263 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 15 23:18:36.311455 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 15 23:18:36.315167 ignition[1038]: INFO : Ignition 2.21.0
Jul 15 23:18:36.315167 ignition[1038]: INFO : Stage: umount
Jul 15 23:18:36.317920 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 15 23:18:36.317920 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 15 23:18:36.316448 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 15 23:18:36.321040 ignition[1038]: INFO : umount: umount passed
Jul 15 23:18:36.321040 ignition[1038]: INFO : Ignition finished successfully
Jul 15 23:18:36.321072 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 15 23:18:36.321158 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 15 23:18:36.323187 systemd[1]: Stopped target network.target - Network.
Jul 15 23:18:36.324462 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 15 23:18:36.324533 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 15 23:18:36.326285 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 15 23:18:36.326331 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 15 23:18:36.328005 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 15 23:18:36.328053 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 15 23:18:36.329688 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 15 23:18:36.329729 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 15 23:18:36.331742 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 15 23:18:36.333522 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 15 23:18:36.338658 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 15 23:18:36.338759 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 15 23:18:36.341623 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 15 23:18:36.341923 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 15 23:18:36.341962 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:18:36.344822 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 15 23:18:36.345775 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 15 23:18:36.345966 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 15 23:18:36.350613 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 15 23:18:36.350737 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 15 23:18:36.352534 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 15 23:18:36.352567 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:18:36.355173 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 15 23:18:36.356451 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 15 23:18:36.356505 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 15 23:18:36.358817 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 15 23:18:36.358873 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:18:36.361483 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 15 23:18:36.361524 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:18:36.363813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:18:36.368578 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 15 23:18:36.384107 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 15 23:18:36.390280 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:18:36.391824 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 15 23:18:36.393720 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 15 23:18:36.394799 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 15 23:18:36.394931 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 15 23:18:36.397126 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 15 23:18:36.397176 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:18:36.398280 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 15 23:18:36.398310 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:18:36.399298 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 15 23:18:36.399342 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 15 23:18:36.401924 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 15 23:18:36.401972 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 15 23:18:36.404664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 15 23:18:36.404711 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 15 23:18:36.406748 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 15 23:18:36.406796 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 15 23:18:36.409335 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 15 23:18:36.410395 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 15 23:18:36.410447 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:18:36.413192 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 15 23:18:36.413242 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:18:36.416325 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 15 23:18:36.416367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 15 23:18:36.439939 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 15 23:18:36.441052 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 15 23:18:36.442370 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 15 23:18:36.444795 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 15 23:18:36.477630 systemd[1]: Switching root.
Jul 15 23:18:36.516073 systemd-journald[244]: Journal stopped
Jul 15 23:18:37.367568 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 15 23:18:37.367619 kernel: SELinux: policy capability network_peer_controls=1
Jul 15 23:18:37.367635 kernel: SELinux: policy capability open_perms=1
Jul 15 23:18:37.367644 kernel: SELinux: policy capability extended_socket_class=1
Jul 15 23:18:37.367655 kernel: SELinux: policy capability always_check_network=0
Jul 15 23:18:37.367665 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 15 23:18:37.367675 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 15 23:18:37.367684 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 15 23:18:37.367693 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 15 23:18:37.367702 kernel: SELinux: policy capability userspace_initial_context=0
Jul 15 23:18:37.367721 kernel: audit: type=1403 audit(1752621516.695:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 15 23:18:37.367735 systemd[1]: Successfully loaded SELinux policy in 57.860ms.
Jul 15 23:18:37.367754 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.294ms.
Jul 15 23:18:37.367765 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 15 23:18:37.367780 systemd[1]: Detected virtualization kvm.
Jul 15 23:18:37.367790 systemd[1]: Detected architecture arm64.
Jul 15 23:18:37.367799 systemd[1]: Detected first boot.
Jul 15 23:18:37.367811 systemd[1]: Initializing machine ID from VM UUID.
Jul 15 23:18:37.367821 zram_generator::config[1085]: No configuration found.
Jul 15 23:18:37.367896 kernel: NET: Registered PF_VSOCK protocol family
Jul 15 23:18:37.367907 systemd[1]: Populated /etc with preset unit settings.
Jul 15 23:18:37.367918 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 15 23:18:37.367928 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 15 23:18:37.367938 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 15 23:18:37.367947 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:18:37.367957 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 15 23:18:37.367971 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 15 23:18:37.367981 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 15 23:18:37.367992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 15 23:18:37.368003 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 15 23:18:37.368012 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 15 23:18:37.368022 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 15 23:18:37.368032 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 15 23:18:37.368041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 15 23:18:37.368051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 15 23:18:37.368061 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 15 23:18:37.368070 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 15 23:18:37.368085 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 15 23:18:37.368096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 15 23:18:37.368106 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 15 23:18:37.368116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 15 23:18:37.368126 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 15 23:18:37.368135 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 15 23:18:37.368145 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 15 23:18:37.368177 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 15 23:18:37.368187 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 15 23:18:37.368197 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 15 23:18:37.368208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 15 23:18:37.368218 systemd[1]: Reached target slices.target - Slice Units.
Jul 15 23:18:37.368234 systemd[1]: Reached target swap.target - Swaps.
Jul 15 23:18:37.368246 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 15 23:18:37.368256 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 15 23:18:37.368266 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 15 23:18:37.368276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 15 23:18:37.368288 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 15 23:18:37.368298 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 15 23:18:37.368309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 15 23:18:37.368318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 15 23:18:37.368328 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 15 23:18:37.368339 systemd[1]: Mounting media.mount - External Media Directory...
Jul 15 23:18:37.368349 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 15 23:18:37.368359 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 15 23:18:37.368369 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 15 23:18:37.368380 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 15 23:18:37.368390 systemd[1]: Reached target machines.target - Containers.
Jul 15 23:18:37.368399 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 15 23:18:37.368409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:18:37.368419 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 15 23:18:37.368429 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 15 23:18:37.368439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:18:37.368449 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:18:37.368460 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:18:37.368470 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 15 23:18:37.368479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:18:37.368489 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 15 23:18:37.368499 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 15 23:18:37.368509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 15 23:18:37.368519 kernel: fuse: init (API version 7.41)
Jul 15 23:18:37.368528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 15 23:18:37.368539 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 15 23:18:37.368549 kernel: loop: module loaded
Jul 15 23:18:37.368559 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:18:37.368569 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 15 23:18:37.368579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 15 23:18:37.368589 kernel: ACPI: bus type drm_connector registered
Jul 15 23:18:37.368598 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 15 23:18:37.368608 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 15 23:18:37.368617 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 15 23:18:37.368628 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 15 23:18:37.368638 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 15 23:18:37.368648 systemd[1]: Stopped verity-setup.service.
Jul 15 23:18:37.368658 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 15 23:18:37.368679 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 15 23:18:37.368695 systemd[1]: Mounted media.mount - External Media Directory.
Jul 15 23:18:37.368704 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 15 23:18:37.368735 systemd-journald[1155]: Collecting audit messages is disabled.
Jul 15 23:18:37.368758 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 15 23:18:37.368769 systemd-journald[1155]: Journal started
Jul 15 23:18:37.368790 systemd-journald[1155]: Runtime Journal (/run/log/journal/42e960440a504ab194ce22278d0d937d) is 6M, max 48.5M, 42.4M free.
Jul 15 23:18:37.140253 systemd[1]: Queued start job for default target multi-user.target.
Jul 15 23:18:37.150797 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 15 23:18:37.151197 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 15 23:18:37.371855 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 15 23:18:37.372353 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 15 23:18:37.374859 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 15 23:18:37.376190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 15 23:18:37.377652 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 15 23:18:37.377811 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 15 23:18:37.379155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:18:37.379322 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:18:37.380751 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:18:37.381940 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:18:37.383165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:18:37.383334 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:18:37.384886 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 15 23:18:37.385041 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 15 23:18:37.386357 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:18:37.386497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:18:37.387824 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 15 23:18:37.389876 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 15 23:18:37.391335 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 15 23:18:37.392798 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 15 23:18:37.405274 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 15 23:18:37.407681 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 15 23:18:37.409687 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 15 23:18:37.410879 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 15 23:18:37.410905 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 15 23:18:37.412803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 15 23:18:37.422651 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 15 23:18:37.423984 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:18:37.425078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 15 23:18:37.426927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 15 23:18:37.428366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:18:37.430991 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 15 23:18:37.432436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:18:37.434119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 15 23:18:37.434245 systemd-journald[1155]: Time spent on flushing to /var/log/journal/42e960440a504ab194ce22278d0d937d is 24.811ms for 885 entries.
Jul 15 23:18:37.434245 systemd-journald[1155]: System Journal (/var/log/journal/42e960440a504ab194ce22278d0d937d) is 8M, max 195.6M, 187.6M free.
Jul 15 23:18:37.464503 systemd-journald[1155]: Received client request to flush runtime journal.
Jul 15 23:18:37.437207 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 15 23:18:37.440851 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 15 23:18:37.445864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 15 23:18:37.447272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 15 23:18:37.449119 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 15 23:18:37.452258 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 15 23:18:37.454621 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 15 23:18:37.458006 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 15 23:18:37.467511 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 15 23:18:37.471776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 15 23:18:37.473598 kernel: loop0: detected capacity change from 0 to 107312
Jul 15 23:18:37.486171 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 15 23:18:37.490657 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 15 23:18:37.492853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 15 23:18:37.497999 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 15 23:18:37.513641 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 15 23:18:37.513659 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 15 23:18:37.517547 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 15 23:18:37.522852 kernel: loop1: detected capacity change from 0 to 138376
Jul 15 23:18:37.552874 kernel: loop2: detected capacity change from 0 to 207008
Jul 15 23:18:37.573857 kernel: loop3: detected capacity change from 0 to 107312
Jul 15 23:18:37.578857 kernel: loop4: detected capacity change from 0 to 138376
Jul 15 23:18:37.585844 kernel: loop5: detected capacity change from 0 to 207008
Jul 15 23:18:37.590432 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 15 23:18:37.590795 (sd-merge)[1223]: Merged extensions into '/usr'.
Jul 15 23:18:37.594211 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 15 23:18:37.594235 systemd[1]: Reloading...
Jul 15 23:18:37.656864 zram_generator::config[1252]: No configuration found.
Jul 15 23:18:37.715348 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 15 23:18:37.726680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:18:37.789532 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 15 23:18:37.789809 systemd[1]: Reloading finished in 195 ms.
Jul 15 23:18:37.814319 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 15 23:18:37.816926 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 15 23:18:37.831070 systemd[1]: Starting ensure-sysext.service...
Jul 15 23:18:37.832923 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 15 23:18:37.856452 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 15 23:18:37.856731 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 15 23:18:37.856960 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Jul 15 23:18:37.856970 systemd[1]: Reloading...
Jul 15 23:18:37.856980 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 15 23:18:37.857179 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 15 23:18:37.857796 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 15 23:18:37.858424 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 15 23:18:37.858540 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 15 23:18:37.873883 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:18:37.874077 systemd-tmpfiles[1284]: Skipping /boot
Jul 15 23:18:37.884036 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 15 23:18:37.884157 systemd-tmpfiles[1284]: Skipping /boot
Jul 15 23:18:37.910856 zram_generator::config[1312]: No configuration found.
Jul 15 23:18:37.976786 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:18:38.040595 systemd[1]: Reloading finished in 183 ms.
Jul 15 23:18:38.063475 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 15 23:18:38.069512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 15 23:18:38.087163 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:18:38.090555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 15 23:18:38.093039 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 15 23:18:38.096987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 15 23:18:38.099755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 15 23:18:38.104121 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 15 23:18:38.110333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:18:38.111489 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:18:38.113681 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:18:38.115934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:18:38.119034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:18:38.119155 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:18:38.126011 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 15 23:18:38.128673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:18:38.129882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:18:38.135087 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 15 23:18:38.138491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:18:38.138736 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:18:38.139425 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Jul 15 23:18:38.140465 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:18:38.146020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:18:38.152459 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 15 23:18:38.157179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 15 23:18:38.159139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 15 23:18:38.161572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 15 23:18:38.163785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 15 23:18:38.171394 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 15 23:18:38.172675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 15 23:18:38.172793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 15 23:18:38.172912 augenrules[1390]: No rules
Jul 15 23:18:38.173979 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 15 23:18:38.176041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 15 23:18:38.179644 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:18:38.179848 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:18:38.181413 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 15 23:18:38.183164 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 15 23:18:38.183325 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 15 23:18:38.184939 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 15 23:18:38.185097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 15 23:18:38.186942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 15 23:18:38.187074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 15 23:18:38.189686 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 15 23:18:38.196848 systemd[1]: Finished ensure-sysext.service.
Jul 15 23:18:38.200684 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 15 23:18:38.219783 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 15 23:18:38.220987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 15 23:18:38.226108 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 15 23:18:38.227282 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 15 23:18:38.229320 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 15 23:18:38.230150 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 15 23:18:38.234156 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 15 23:18:38.257360 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 15 23:18:38.287669 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 15 23:18:38.291320 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 15 23:18:38.328397 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 15 23:18:38.339355 systemd-resolved[1351]: Positive Trust Anchors:
Jul 15 23:18:38.339376 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 15 23:18:38.339408 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 15 23:18:38.343295 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 15 23:18:38.344813 systemd[1]: Reached target time-set.target - System Time Set.
Jul 15 23:18:38.352616 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Jul 15 23:18:38.355808 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 15 23:18:38.357079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 15 23:18:38.358291 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 15 23:18:38.358297 systemd-networkd[1425]: lo: Link UP
Jul 15 23:18:38.358301 systemd-networkd[1425]: lo: Gained carrier
Jul 15 23:18:38.359124 systemd-networkd[1425]: Enumeration completed
Jul 15 23:18:38.359467 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 15 23:18:38.359561 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:18:38.359570 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 15 23:18:38.360825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 15 23:18:38.361037 systemd-networkd[1425]: eth0: Link UP
Jul 15 23:18:38.361196 systemd-networkd[1425]: eth0: Gained carrier
Jul 15 23:18:38.361211 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 15 23:18:38.362266 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 15 23:18:38.363543 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 15 23:18:38.364820 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 15 23:18:38.366086 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 15 23:18:38.366120 systemd[1]: Reached target paths.target - Path Units.
Jul 15 23:18:38.367038 systemd[1]: Reached target timers.target - Timer Units.
Jul 15 23:18:38.369526 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 15 23:18:38.372189 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 15 23:18:38.375427 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 23:18:38.376867 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 23:18:38.378088 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 23:18:38.380287 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 23:18:38.380919 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. Jul 15 23:18:38.381798 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 23:18:38.382236 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 23:18:38.382292 systemd-timesyncd[1431]: Initial clock synchronization to Tue 2025-07-15 23:18:38.684425 UTC. Jul 15 23:18:38.383870 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 23:18:38.385858 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 23:18:38.387410 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 23:18:38.390879 systemd[1]: Reached target network.target - Network. Jul 15 23:18:38.391775 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 23:18:38.392735 systemd[1]: Reached target basic.target - Basic System. Jul 15 23:18:38.393709 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:18:38.393742 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 15 23:18:38.395075 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 23:18:38.397381 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 23:18:38.399982 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 15 23:18:38.403016 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 23:18:38.405619 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 23:18:38.406678 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 23:18:38.411283 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 23:18:38.413257 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 23:18:38.415387 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 23:18:38.418059 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 23:18:38.422676 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 23:18:38.424678 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 23:18:38.426729 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 23:18:38.428825 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 23:18:38.430313 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 23:18:38.431770 jq[1460]: false Jul 15 23:18:38.432264 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 23:18:38.437966 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 23:18:38.441734 extend-filesystems[1461]: Found /dev/vda6 Jul 15 23:18:38.446060 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 15 23:18:38.449902 jq[1477]: true Jul 15 23:18:38.450321 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 23:18:38.450514 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 23:18:38.455598 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 23:18:38.455822 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 23:18:38.460348 extend-filesystems[1461]: Found /dev/vda9 Jul 15 23:18:38.463941 extend-filesystems[1461]: Checking size of /dev/vda9 Jul 15 23:18:38.492513 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 23:18:38.505361 extend-filesystems[1461]: Resized partition /dev/vda9 Jul 15 23:18:38.506632 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 23:18:38.506758 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 23:18:38.509210 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 23:18:38.516316 tar[1482]: linux-arm64/LICENSE Jul 15 23:18:38.516557 tar[1482]: linux-arm64/helm Jul 15 23:18:38.521139 jq[1488]: true Jul 15 23:18:38.524339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 23:18:38.528169 dbus-daemon[1456]: [system] SELinux support is enabled Jul 15 23:18:38.529045 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 23:18:38.532703 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 23:18:38.532864 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jul 15 23:18:38.534244 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 23:18:38.534350 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 23:18:38.552263 extend-filesystems[1507]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 23:18:38.561289 update_engine[1476]: I20250715 23:18:38.561053 1476 main.cc:92] Flatcar Update Engine starting Jul 15 23:18:38.570130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 23:18:38.569717 systemd[1]: Started update-engine.service - Update Engine. Jul 15 23:18:38.570272 update_engine[1476]: I20250715 23:18:38.568467 1476 update_check_scheduler.cc:74] Next update check in 6m59s Jul 15 23:18:38.576238 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 23:18:38.594930 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 23:18:38.613466 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 23:18:38.613466 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 23:18:38.613466 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 23:18:38.621799 extend-filesystems[1461]: Resized filesystem in /dev/vda9 Jul 15 23:18:38.617092 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 23:18:38.629215 bash[1527]: Updated "/home/core/.ssh/authorized_keys" Jul 15 23:18:38.617390 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 23:18:38.639341 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 23:18:38.644985 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 15 23:18:38.700208 locksmithd[1514]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 23:18:38.711906 systemd-logind[1473]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 23:18:38.715346 systemd-logind[1473]: New seat seat0. Jul 15 23:18:38.718074 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 23:18:38.728915 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 23:18:38.840026 containerd[1497]: time="2025-07-15T23:18:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 23:18:38.841234 containerd[1497]: time="2025-07-15T23:18:38.841192440Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 15 23:18:38.853078 containerd[1497]: time="2025-07-15T23:18:38.853021120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.32µs" Jul 15 23:18:38.853162 containerd[1497]: time="2025-07-15T23:18:38.853110840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 23:18:38.853162 containerd[1497]: time="2025-07-15T23:18:38.853134120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 23:18:38.853472 containerd[1497]: time="2025-07-15T23:18:38.853400480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 23:18:38.853499 containerd[1497]: time="2025-07-15T23:18:38.853475200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 23:18:38.853517 containerd[1497]: time="2025-07-15T23:18:38.853504080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jul 15 23:18:38.853640 containerd[1497]: time="2025-07-15T23:18:38.853616040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 23:18:38.853694 containerd[1497]: time="2025-07-15T23:18:38.853676960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:18:38.855507 containerd[1497]: time="2025-07-15T23:18:38.855348840Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 23:18:38.855601 containerd[1497]: time="2025-07-15T23:18:38.855480960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:18:38.855682 containerd[1497]: time="2025-07-15T23:18:38.855661240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 23:18:38.855739 containerd[1497]: time="2025-07-15T23:18:38.855680840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 23:18:38.855818 containerd[1497]: time="2025-07-15T23:18:38.855797440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 23:18:38.856341 containerd[1497]: time="2025-07-15T23:18:38.856295240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:18:38.856383 containerd[1497]: time="2025-07-15T23:18:38.856344640Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 23:18:38.856426 containerd[1497]: time="2025-07-15T23:18:38.856399960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 23:18:38.856459 containerd[1497]: time="2025-07-15T23:18:38.856447520Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 23:18:38.856935 containerd[1497]: time="2025-07-15T23:18:38.856910120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 23:18:38.857100 containerd[1497]: time="2025-07-15T23:18:38.857036160Z" level=info msg="metadata content store policy set" policy=shared Jul 15 23:18:38.862177 containerd[1497]: time="2025-07-15T23:18:38.862139560Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 23:18:38.862267 containerd[1497]: time="2025-07-15T23:18:38.862245280Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 23:18:38.862295 containerd[1497]: time="2025-07-15T23:18:38.862269120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 23:18:38.862295 containerd[1497]: time="2025-07-15T23:18:38.862284240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 23:18:38.862380 containerd[1497]: time="2025-07-15T23:18:38.862346320Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 23:18:38.862428 containerd[1497]: time="2025-07-15T23:18:38.862383360Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 23:18:38.862447 containerd[1497]: time="2025-07-15T23:18:38.862432600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jul 15 23:18:38.862464 containerd[1497]: time="2025-07-15T23:18:38.862450520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 23:18:38.862482 containerd[1497]: time="2025-07-15T23:18:38.862464040Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 23:18:38.862482 containerd[1497]: time="2025-07-15T23:18:38.862474840Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 23:18:38.862547 containerd[1497]: time="2025-07-15T23:18:38.862484600Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 23:18:38.862578 containerd[1497]: time="2025-07-15T23:18:38.862548600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 23:18:38.862744 containerd[1497]: time="2025-07-15T23:18:38.862724320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 23:18:38.862818 containerd[1497]: time="2025-07-15T23:18:38.862802240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 23:18:38.862864 containerd[1497]: time="2025-07-15T23:18:38.862825280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 23:18:38.862935 containerd[1497]: time="2025-07-15T23:18:38.862869000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 23:18:38.862958 containerd[1497]: time="2025-07-15T23:18:38.862941360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 23:18:38.862977 containerd[1497]: time="2025-07-15T23:18:38.862956440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 23:18:38.862977 
containerd[1497]: time="2025-07-15T23:18:38.862970520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 23:18:38.863030 containerd[1497]: time="2025-07-15T23:18:38.862989120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 23:18:38.863078 containerd[1497]: time="2025-07-15T23:18:38.863061720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 23:18:38.863100 containerd[1497]: time="2025-07-15T23:18:38.863086760Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 23:18:38.863123 containerd[1497]: time="2025-07-15T23:18:38.863102080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 23:18:38.863397 containerd[1497]: time="2025-07-15T23:18:38.863363720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 23:18:38.863397 containerd[1497]: time="2025-07-15T23:18:38.863390800Z" level=info msg="Start snapshots syncer" Jul 15 23:18:38.863458 containerd[1497]: time="2025-07-15T23:18:38.863422400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 23:18:38.863723 containerd[1497]: time="2025-07-15T23:18:38.863688320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 23:18:38.863842 containerd[1497]: time="2025-07-15T23:18:38.863744120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 23:18:38.863842 containerd[1497]: time="2025-07-15T23:18:38.863812680Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 23:18:38.864147 containerd[1497]: time="2025-07-15T23:18:38.864120440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 23:18:38.864226 containerd[1497]: time="2025-07-15T23:18:38.864203360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 23:18:38.864251 containerd[1497]: time="2025-07-15T23:18:38.864232200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 23:18:38.864251 containerd[1497]: time="2025-07-15T23:18:38.864245840Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 23:18:38.864285 containerd[1497]: time="2025-07-15T23:18:38.864257520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 23:18:38.864285 containerd[1497]: time="2025-07-15T23:18:38.864269920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 23:18:38.864335 containerd[1497]: time="2025-07-15T23:18:38.864320920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 23:18:38.864380 containerd[1497]: time="2025-07-15T23:18:38.864366720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 23:18:38.864400 containerd[1497]: time="2025-07-15T23:18:38.864382640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 23:18:38.864470 containerd[1497]: time="2025-07-15T23:18:38.864456440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 23:18:38.864517 containerd[1497]: time="2025-07-15T23:18:38.864504640Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:18:38.864545 containerd[1497]: time="2025-07-15T23:18:38.864523040Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 23:18:38.864607 containerd[1497]: time="2025-07-15T23:18:38.864532080Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:18:38.864633 containerd[1497]: time="2025-07-15T23:18:38.864604640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 23:18:38.864633 containerd[1497]: time="2025-07-15T23:18:38.864614480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 23:18:38.864633 containerd[1497]: time="2025-07-15T23:18:38.864624680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 23:18:38.864687 containerd[1497]: time="2025-07-15T23:18:38.864635160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 23:18:38.864767 containerd[1497]: time="2025-07-15T23:18:38.864751160Z" level=info msg="runtime interface created" Jul 15 23:18:38.864767 containerd[1497]: time="2025-07-15T23:18:38.864762920Z" level=info msg="created NRI interface" Jul 15 23:18:38.864809 containerd[1497]: time="2025-07-15T23:18:38.864774080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 23:18:38.864809 containerd[1497]: time="2025-07-15T23:18:38.864788080Z" level=info msg="Connect containerd service" Jul 15 23:18:38.864895 containerd[1497]: time="2025-07-15T23:18:38.864877760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 23:18:38.866038 
containerd[1497]: time="2025-07-15T23:18:38.866012960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 23:18:38.964569 tar[1482]: linux-arm64/README.md Jul 15 23:18:38.983471 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 15 23:18:38.993426 containerd[1497]: time="2025-07-15T23:18:38.993309680Z" level=info msg="Start subscribing containerd event" Jul 15 23:18:38.993426 containerd[1497]: time="2025-07-15T23:18:38.993396520Z" level=info msg="Start recovering state" Jul 15 23:18:38.993532 containerd[1497]: time="2025-07-15T23:18:38.993498800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 23:18:38.993647 containerd[1497]: time="2025-07-15T23:18:38.993550720Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 23:18:38.993737 containerd[1497]: time="2025-07-15T23:18:38.993720440Z" level=info msg="Start event monitor" Jul 15 23:18:38.993809 containerd[1497]: time="2025-07-15T23:18:38.993798360Z" level=info msg="Start cni network conf syncer for default" Jul 15 23:18:38.993891 containerd[1497]: time="2025-07-15T23:18:38.993879120Z" level=info msg="Start streaming server" Jul 15 23:18:38.994145 containerd[1497]: time="2025-07-15T23:18:38.994127240Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 23:18:38.994207 containerd[1497]: time="2025-07-15T23:18:38.994195120Z" level=info msg="runtime interface starting up..." Jul 15 23:18:38.994345 containerd[1497]: time="2025-07-15T23:18:38.994271240Z" level=info msg="starting plugins..." 
Jul 15 23:18:38.994345 containerd[1497]: time="2025-07-15T23:18:38.994304960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 23:18:38.994570 containerd[1497]: time="2025-07-15T23:18:38.994538840Z" level=info msg="containerd successfully booted in 0.154926s" Jul 15 23:18:38.994647 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 23:18:39.049068 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 23:18:39.069072 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 23:18:39.072489 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 23:18:39.099580 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 23:18:39.099835 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 23:18:39.102539 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 23:18:39.126844 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 23:18:39.129702 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 23:18:39.131842 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 23:18:39.133165 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 23:18:39.706934 systemd-networkd[1425]: eth0: Gained IPv6LL Jul 15 23:18:39.709249 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 23:18:39.710987 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 23:18:39.714382 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 23:18:39.716892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:18:39.725380 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 23:18:39.740030 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jul 15 23:18:39.740260 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 23:18:39.741774 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 15 23:18:39.745780 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 23:18:40.288944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:18:40.290825 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 23:18:40.293705 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 23:18:40.296038 systemd[1]: Startup finished in 2.102s (kernel) + 6.061s (initrd) + 3.658s (userspace) = 11.823s. Jul 15 23:18:40.730494 kubelet[1605]: E0715 23:18:40.730426 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 23:18:40.732741 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 23:18:40.732893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 23:18:40.733966 systemd[1]: kubelet.service: Consumed 812ms CPU time, 257.7M memory peak. Jul 15 23:18:44.093206 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 23:18:44.094304 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:47816.service - OpenSSH per-connection server daemon (10.0.0.1:47816). 
Jul 15 23:18:44.229570 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 47816 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:44.231364 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:44.239261 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 23:18:44.240163 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 23:18:44.245478 systemd-logind[1473]: New session 1 of user core. Jul 15 23:18:44.259952 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 23:18:44.263976 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 23:18:44.280748 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 23:18:44.283020 systemd-logind[1473]: New session c1 of user core. Jul 15 23:18:44.410100 systemd[1623]: Queued start job for default target default.target. Jul 15 23:18:44.420796 systemd[1623]: Created slice app.slice - User Application Slice. Jul 15 23:18:44.420826 systemd[1623]: Reached target paths.target - Paths. Jul 15 23:18:44.420887 systemd[1623]: Reached target timers.target - Timers. Jul 15 23:18:44.422066 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 23:18:44.430691 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 23:18:44.430753 systemd[1623]: Reached target sockets.target - Sockets. Jul 15 23:18:44.430788 systemd[1623]: Reached target basic.target - Basic System. Jul 15 23:18:44.430816 systemd[1623]: Reached target default.target - Main User Target. Jul 15 23:18:44.430862 systemd[1623]: Startup finished in 142ms. Jul 15 23:18:44.430993 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 23:18:44.432313 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jul 15 23:18:44.495051 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:47832.service - OpenSSH per-connection server daemon (10.0.0.1:47832). Jul 15 23:18:44.540517 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 47832 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:44.542275 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:44.550861 systemd-logind[1473]: New session 2 of user core. Jul 15 23:18:44.568026 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 23:18:44.621611 sshd[1636]: Connection closed by 10.0.0.1 port 47832 Jul 15 23:18:44.622114 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Jul 15 23:18:44.631971 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:47832.service: Deactivated successfully. Jul 15 23:18:44.634314 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 23:18:44.635140 systemd-logind[1473]: Session 2 logged out. Waiting for processes to exit. Jul 15 23:18:44.637809 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:47838.service - OpenSSH per-connection server daemon (10.0.0.1:47838). Jul 15 23:18:44.638463 systemd-logind[1473]: Removed session 2. Jul 15 23:18:44.689992 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 47838 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:18:44.691411 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:18:44.695506 systemd-logind[1473]: New session 3 of user core. Jul 15 23:18:44.705014 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 23:18:44.752895 sshd[1644]: Connection closed by 10.0.0.1 port 47838 Jul 15 23:18:44.753289 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Jul 15 23:18:44.769084 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:47838.service: Deactivated successfully. 
Jul 15 23:18:44.770575 systemd[1]: session-3.scope: Deactivated successfully.
Jul 15 23:18:44.772353 systemd-logind[1473]: Session 3 logged out. Waiting for processes to exit.
Jul 15 23:18:44.774752 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:47846.service - OpenSSH per-connection server daemon (10.0.0.1:47846).
Jul 15 23:18:44.775246 systemd-logind[1473]: Removed session 3.
Jul 15 23:18:44.823479 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 47846 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:44.824831 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:44.829055 systemd-logind[1473]: New session 4 of user core.
Jul 15 23:18:44.840087 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 15 23:18:44.898121 sshd[1652]: Connection closed by 10.0.0.1 port 47846
Jul 15 23:18:44.898837 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:44.907923 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:47846.service: Deactivated successfully.
Jul 15 23:18:44.909363 systemd[1]: session-4.scope: Deactivated successfully.
Jul 15 23:18:44.910822 systemd-logind[1473]: Session 4 logged out. Waiting for processes to exit.
Jul 15 23:18:44.912004 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:47854.service - OpenSSH per-connection server daemon (10.0.0.1:47854).
Jul 15 23:18:44.912768 systemd-logind[1473]: Removed session 4.
Jul 15 23:18:44.981614 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 47854 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:44.982971 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:44.989129 systemd-logind[1473]: New session 5 of user core.
Jul 15 23:18:44.998254 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 15 23:18:45.062268 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 15 23:18:45.062520 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:18:45.086706 sudo[1662]: pam_unix(sudo:session): session closed for user root
Jul 15 23:18:45.088227 sshd[1661]: Connection closed by 10.0.0.1 port 47854
Jul 15 23:18:45.088609 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:45.103078 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:47854.service: Deactivated successfully.
Jul 15 23:18:45.105083 systemd[1]: session-5.scope: Deactivated successfully.
Jul 15 23:18:45.105957 systemd-logind[1473]: Session 5 logged out. Waiting for processes to exit.
Jul 15 23:18:45.108386 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:47870.service - OpenSSH per-connection server daemon (10.0.0.1:47870).
Jul 15 23:18:45.109031 systemd-logind[1473]: Removed session 5.
Jul 15 23:18:45.163662 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 47870 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:45.165181 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:45.169646 systemd-logind[1473]: New session 6 of user core.
Jul 15 23:18:45.181041 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 15 23:18:45.233763 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 15 23:18:45.234062 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:18:45.306319 sudo[1672]: pam_unix(sudo:session): session closed for user root
Jul 15 23:18:45.311530 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 15 23:18:45.311791 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:18:45.320188 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 15 23:18:45.360824 augenrules[1694]: No rules
Jul 15 23:18:45.362018 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 15 23:18:45.362218 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 15 23:18:45.363101 sudo[1671]: pam_unix(sudo:session): session closed for user root
Jul 15 23:18:45.364327 sshd[1670]: Connection closed by 10.0.0.1 port 47870
Jul 15 23:18:45.364677 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Jul 15 23:18:45.378955 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:47870.service: Deactivated successfully.
Jul 15 23:18:45.380387 systemd[1]: session-6.scope: Deactivated successfully.
Jul 15 23:18:45.381294 systemd-logind[1473]: Session 6 logged out. Waiting for processes to exit.
Jul 15 23:18:45.383310 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:47884.service - OpenSSH per-connection server daemon (10.0.0.1:47884).
Jul 15 23:18:45.385402 systemd-logind[1473]: Removed session 6.
Jul 15 23:18:45.440248 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 47884 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:18:45.441472 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:18:45.445563 systemd-logind[1473]: New session 7 of user core.
Jul 15 23:18:45.456002 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 15 23:18:45.506721 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 15 23:18:45.507378 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 15 23:18:45.842359 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 15 23:18:45.855225 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 15 23:18:46.130540 dockerd[1727]: time="2025-07-15T23:18:46.130451175Z" level=info msg="Starting up"
Jul 15 23:18:46.131918 dockerd[1727]: time="2025-07-15T23:18:46.131827133Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 15 23:18:46.183693 dockerd[1727]: time="2025-07-15T23:18:46.183621732Z" level=info msg="Loading containers: start."
Jul 15 23:18:46.191882 kernel: Initializing XFRM netlink socket
Jul 15 23:18:46.403149 systemd-networkd[1425]: docker0: Link UP
Jul 15 23:18:46.407831 dockerd[1727]: time="2025-07-15T23:18:46.407755510Z" level=info msg="Loading containers: done."
Jul 15 23:18:46.425657 dockerd[1727]: time="2025-07-15T23:18:46.425583733Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 15 23:18:46.425807 dockerd[1727]: time="2025-07-15T23:18:46.425690251Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 15 23:18:46.425807 dockerd[1727]: time="2025-07-15T23:18:46.425798757Z" level=info msg="Initializing buildkit"
Jul 15 23:18:46.447041 dockerd[1727]: time="2025-07-15T23:18:46.446988488Z" level=info msg="Completed buildkit initialization"
Jul 15 23:18:46.453603 dockerd[1727]: time="2025-07-15T23:18:46.453561430Z" level=info msg="Daemon has completed initialization"
Jul 15 23:18:46.453676 dockerd[1727]: time="2025-07-15T23:18:46.453626583Z" level=info msg="API listen on /run/docker.sock"
Jul 15 23:18:46.453802 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 15 23:18:47.043291 containerd[1497]: time="2025-07-15T23:18:47.043247959Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\""
Jul 15 23:18:47.673919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293538501.mount: Deactivated successfully.
Jul 15 23:18:48.653653 containerd[1497]: time="2025-07-15T23:18:48.653587397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:48.655253 containerd[1497]: time="2025-07-15T23:18:48.655220339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783"
Jul 15 23:18:48.655960 containerd[1497]: time="2025-07-15T23:18:48.655917449Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:48.659219 containerd[1497]: time="2025-07-15T23:18:48.659161084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:48.660132 containerd[1497]: time="2025-07-15T23:18:48.660011112Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 1.616722245s"
Jul 15 23:18:48.660132 containerd[1497]: time="2025-07-15T23:18:48.660046833Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\""
Jul 15 23:18:48.660645 containerd[1497]: time="2025-07-15T23:18:48.660591996Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\""
Jul 15 23:18:49.701174 containerd[1497]: time="2025-07-15T23:18:49.701122665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:49.703039 containerd[1497]: time="2025-07-15T23:18:49.702998770Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698"
Jul 15 23:18:49.703997 containerd[1497]: time="2025-07-15T23:18:49.703971100Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:49.708152 containerd[1497]: time="2025-07-15T23:18:49.708110279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:49.709693 containerd[1497]: time="2025-07-15T23:18:49.709659285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.048953665s"
Jul 15 23:18:49.709729 containerd[1497]: time="2025-07-15T23:18:49.709693744Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\""
Jul 15 23:18:49.710208 containerd[1497]: time="2025-07-15T23:18:49.710187120Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\""
Jul 15 23:18:50.859508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 15 23:18:50.862376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:18:51.002657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:18:51.006100 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:18:51.138975 kubelet[2009]: E0715 23:18:51.138830 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:18:51.142120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:18:51.142258 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:18:51.143925 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.7M memory peak.
Jul 15 23:18:51.150772 containerd[1497]: time="2025-07-15T23:18:51.150725455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:51.151662 containerd[1497]: time="2025-07-15T23:18:51.151633231Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140"
Jul 15 23:18:51.152352 containerd[1497]: time="2025-07-15T23:18:51.152281810Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:51.157599 containerd[1497]: time="2025-07-15T23:18:51.157546897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:51.158988 containerd[1497]: time="2025-07-15T23:18:51.158947193Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.44873262s"
Jul 15 23:18:51.159040 containerd[1497]: time="2025-07-15T23:18:51.158988424Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\""
Jul 15 23:18:51.159639 containerd[1497]: time="2025-07-15T23:18:51.159543376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\""
Jul 15 23:18:52.164968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3064014655.mount: Deactivated successfully.
Jul 15 23:18:52.383449 containerd[1497]: time="2025-07-15T23:18:52.383258793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:52.384170 containerd[1497]: time="2025-07-15T23:18:52.383953146Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407"
Jul 15 23:18:52.384745 containerd[1497]: time="2025-07-15T23:18:52.384711160Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:52.386592 containerd[1497]: time="2025-07-15T23:18:52.386564701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:52.387090 containerd[1497]: time="2025-07-15T23:18:52.387059736Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.227483203s"
Jul 15 23:18:52.387141 containerd[1497]: time="2025-07-15T23:18:52.387092714Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\""
Jul 15 23:18:52.387588 containerd[1497]: time="2025-07-15T23:18:52.387568059Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 15 23:18:53.008684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888878025.mount: Deactivated successfully.
Jul 15 23:18:53.864254 containerd[1497]: time="2025-07-15T23:18:53.864203996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:53.864644 containerd[1497]: time="2025-07-15T23:18:53.864617069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 15 23:18:53.865539 containerd[1497]: time="2025-07-15T23:18:53.865492863Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:53.870624 containerd[1497]: time="2025-07-15T23:18:53.870569149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:53.871853 containerd[1497]: time="2025-07-15T23:18:53.871694991Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.484084943s"
Jul 15 23:18:53.871853 containerd[1497]: time="2025-07-15T23:18:53.871738725Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 15 23:18:53.872263 containerd[1497]: time="2025-07-15T23:18:53.872152522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 15 23:18:54.308605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1299460250.mount: Deactivated successfully.
Jul 15 23:18:54.312845 containerd[1497]: time="2025-07-15T23:18:54.312793556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:18:54.314197 containerd[1497]: time="2025-07-15T23:18:54.314161737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 15 23:18:54.315117 containerd[1497]: time="2025-07-15T23:18:54.315085060Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:18:54.317513 containerd[1497]: time="2025-07-15T23:18:54.317469797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 15 23:18:54.318694 containerd[1497]: time="2025-07-15T23:18:54.318662286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 446.318098ms"
Jul 15 23:18:54.318731 containerd[1497]: time="2025-07-15T23:18:54.318695816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 15 23:18:54.319142 containerd[1497]: time="2025-07-15T23:18:54.319115585Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 15 23:18:54.825376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859724139.mount: Deactivated successfully.
Jul 15 23:18:56.312576 containerd[1497]: time="2025-07-15T23:18:56.312516720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:56.313184 containerd[1497]: time="2025-07-15T23:18:56.313148576Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 15 23:18:56.313934 containerd[1497]: time="2025-07-15T23:18:56.313903552Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:56.316630 containerd[1497]: time="2025-07-15T23:18:56.316577750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 15 23:18:56.317759 containerd[1497]: time="2025-07-15T23:18:56.317759184Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.99861269s"
Jul 15 23:18:56.317911 containerd[1497]: time="2025-07-15T23:18:56.317815643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 15 23:19:01.359612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 15 23:19:01.361412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:19:01.526991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:19:01.542140 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 15 23:19:01.591667 kubelet[2167]: E0715 23:19:01.591605 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 15 23:19:01.594067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 15 23:19:01.594195 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 15 23:19:01.594655 systemd[1]: kubelet.service: Consumed 129ms CPU time, 107.5M memory peak.
Jul 15 23:19:01.993709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:19:01.993868 systemd[1]: kubelet.service: Consumed 129ms CPU time, 107.5M memory peak.
Jul 15 23:19:01.996013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:19:02.017318 systemd[1]: Reload requested from client PID 2182 ('systemctl') (unit session-7.scope)...
Jul 15 23:19:02.017333 systemd[1]: Reloading...
Jul 15 23:19:02.085871 zram_generator::config[2228]: No configuration found.
Jul 15 23:19:02.361058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 15 23:19:02.446221 systemd[1]: Reloading finished in 428 ms.
Jul 15 23:19:02.508428 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 15 23:19:02.508521 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 15 23:19:02.508798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:19:02.508851 systemd[1]: kubelet.service: Consumed 86ms CPU time, 94.9M memory peak.
Jul 15 23:19:02.510544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 15 23:19:02.634407 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 15 23:19:02.637467 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 15 23:19:02.671296 kubelet[2270]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:19:02.671296 kubelet[2270]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 15 23:19:02.671296 kubelet[2270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 15 23:19:02.671614 kubelet[2270]: I0715 23:19:02.671378 2270 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 15 23:19:03.369837 kubelet[2270]: I0715 23:19:03.369788 2270 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 15 23:19:03.369837 kubelet[2270]: I0715 23:19:03.369818 2270 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 15 23:19:03.370099 kubelet[2270]: I0715 23:19:03.370075 2270 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 15 23:19:03.424708 kubelet[2270]: E0715 23:19:03.424653 2270 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:03.426492 kubelet[2270]: I0715 23:19:03.426457 2270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 15 23:19:03.437863 kubelet[2270]: I0715 23:19:03.437758 2270 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 15 23:19:03.441605 kubelet[2270]: I0715 23:19:03.441578 2270 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 15 23:19:03.442200 kubelet[2270]: I0715 23:19:03.442157 2270 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 15 23:19:03.442358 kubelet[2270]: I0715 23:19:03.442192 2270 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 15 23:19:03.442442 kubelet[2270]: I0715 23:19:03.442419 2270 topology_manager.go:138] "Creating topology manager with none policy"
Jul 15 23:19:03.442442 kubelet[2270]: I0715 23:19:03.442429 2270 container_manager_linux.go:304] "Creating device plugin manager"
Jul 15 23:19:03.442632 kubelet[2270]: I0715 23:19:03.442607 2270 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:19:03.446745 kubelet[2270]: I0715 23:19:03.446717 2270 kubelet.go:446] "Attempting to sync node with API server"
Jul 15 23:19:03.446778 kubelet[2270]: I0715 23:19:03.446748 2270 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 15 23:19:03.446778 kubelet[2270]: I0715 23:19:03.446772 2270 kubelet.go:352] "Adding apiserver pod source"
Jul 15 23:19:03.446824 kubelet[2270]: I0715 23:19:03.446782 2270 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 15 23:19:03.449563 kubelet[2270]: I0715 23:19:03.449410 2270 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 15 23:19:03.450083 kubelet[2270]: I0715 23:19:03.450052 2270 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 15 23:19:03.450184 kubelet[2270]: W0715 23:19:03.450169 2270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 15 23:19:03.451007 kubelet[2270]: I0715 23:19:03.450969 2270 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 15 23:19:03.451007 kubelet[2270]: I0715 23:19:03.451011 2270 server.go:1287] "Started kubelet"
Jul 15 23:19:03.453094 kubelet[2270]: W0715 23:19:03.451694 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 15 23:19:03.453094 kubelet[2270]: E0715 23:19:03.451751 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:03.453094 kubelet[2270]: I0715 23:19:03.452028 2270 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 15 23:19:03.453094 kubelet[2270]: I0715 23:19:03.452805 2270 server.go:479] "Adding debug handlers to kubelet server"
Jul 15 23:19:03.453094 kubelet[2270]: I0715 23:19:03.452926 2270 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 15 23:19:03.453855 kubelet[2270]: W0715 23:19:03.453800 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 15 23:19:03.453957 kubelet[2270]: E0715 23:19:03.453939 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:03.454140 kubelet[2270]: I0715 23:19:03.454082 2270 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 15 23:19:03.454313 kubelet[2270]: I0715 23:19:03.454295 2270 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 15 23:19:03.454575 kubelet[2270]: I0715 23:19:03.454555 2270 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 15 23:19:03.454657 kubelet[2270]: I0715 23:19:03.454643 2270 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 15 23:19:03.454698 kubelet[2270]: I0715 23:19:03.454685 2270 reconciler.go:26] "Reconciler: start to sync state"
Jul 15 23:19:03.455372 kubelet[2270]: I0715 23:19:03.455342 2270 factory.go:221] Registration of the systemd container factory successfully
Jul 15 23:19:03.455430 kubelet[2270]: I0715 23:19:03.455417 2270 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 15 23:19:03.455498 kubelet[2270]: E0715 23:19:03.455481 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:03.455876 kubelet[2270]: W0715 23:19:03.455767 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 15 23:19:03.455963 kubelet[2270]: E0715 23:19:03.455946 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:03.456704 kubelet[2270]: E0715 23:19:03.456673 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms"
Jul 15 23:19:03.456820 kubelet[2270]: I0715 23:19:03.456802 2270 factory.go:221] Registration of the containerd container factory successfully
Jul 15 23:19:03.459296 kubelet[2270]: I0715 23:19:03.458066 2270 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 15 23:19:03.467173 kubelet[2270]: E0715 23:19:03.466845 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185290020b17c93c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 23:19:03.450986812 +0000 UTC m=+0.810816413,LastTimestamp:2025-07-15 23:19:03.450986812 +0000 UTC m=+0.810816413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 15 23:19:03.469128 kubelet[2270]: E0715 23:19:03.469103 2270 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 15 23:19:03.472432 kubelet[2270]: I0715 23:19:03.472407 2270 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 15 23:19:03.472432 kubelet[2270]: I0715 23:19:03.472421 2270 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 15 23:19:03.472546 kubelet[2270]: I0715 23:19:03.472529 2270 state_mem.go:36] "Initialized new in-memory state store"
Jul 15 23:19:03.474479 kubelet[2270]: I0715 23:19:03.474426 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 15 23:19:03.475391 kubelet[2270]: I0715 23:19:03.475361 2270 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 15 23:19:03.475763 kubelet[2270]: I0715 23:19:03.475739 2270 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 15 23:19:03.475940 kubelet[2270]: I0715 23:19:03.475775 2270 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 15 23:19:03.475940 kubelet[2270]: I0715 23:19:03.475785 2270 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 15 23:19:03.475940 kubelet[2270]: E0715 23:19:03.475826 2270 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 15 23:19:03.476340 kubelet[2270]: W0715 23:19:03.476301 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 15 23:19:03.476386 kubelet[2270]: E0715 23:19:03.476348 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:03.556715 kubelet[2270]: E0715 23:19:03.556671 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:03.576894 kubelet[2270]: E0715 23:19:03.576864 2270 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 15 23:19:03.585649 kubelet[2270]: I0715 23:19:03.585612 2270 policy_none.go:49] "None policy: Start"
Jul 15 23:19:03.585649 kubelet[2270]: I0715 23:19:03.585639 2270 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 15 23:19:03.585710 kubelet[2270]: I0715 23:19:03.585652 2270 state_mem.go:35] "Initializing new in-memory state store"
Jul 15 23:19:03.591197 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 15 23:19:03.603562 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 15 23:19:03.630658 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 15 23:19:03.633082 kubelet[2270]: I0715 23:19:03.632896 2270 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 15 23:19:03.633157 kubelet[2270]: I0715 23:19:03.633101 2270 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 15 23:19:03.633157 kubelet[2270]: I0715 23:19:03.633115 2270 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 15 23:19:03.633655 kubelet[2270]: I0715 23:19:03.633418 2270 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 15 23:19:03.634452 kubelet[2270]: E0715 23:19:03.634426 2270 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 15 23:19:03.634586 kubelet[2270]: E0715 23:19:03.634539 2270 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 15 23:19:03.657768 kubelet[2270]: E0715 23:19:03.657741 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms"
Jul 15 23:19:03.734944 kubelet[2270]: I0715 23:19:03.734893 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 23:19:03.735364 kubelet[2270]: E0715 23:19:03.735330 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 15 23:19:03.784904 systemd[1]: Created slice kubepods-burstable-pod21b4bccd324c77aeb99af15e6e6e8783.slice - libcontainer container kubepods-burstable-pod21b4bccd324c77aeb99af15e6e6e8783.slice.
Jul 15 23:19:03.814081 kubelet[2270]: E0715 23:19:03.814029 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:03.816669 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice.
Jul 15 23:19:03.829241 kubelet[2270]: E0715 23:19:03.829076 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:03.831297 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice.
Jul 15 23:19:03.833072 kubelet[2270]: E0715 23:19:03.833051 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:03.857399 kubelet[2270]: I0715 23:19:03.857369 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:03.857451 kubelet[2270]: I0715 23:19:03.857404 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:03.857451 kubelet[2270]: I0715 23:19:03.857426 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:03.857451 kubelet[2270]: I0715 23:19:03.857442 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:19:03.857514 kubelet[2270]: I0715 23:19:03.857457 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:19:03.857514 kubelet[2270]: I0715 23:19:03.857472 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost"
Jul 15 23:19:03.857514 kubelet[2270]: I0715 23:19:03.857487 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:03.857514 kubelet[2270]: I0715 23:19:03.857502 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:03.857588 kubelet[2270]: I0715 23:19:03.857517 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost"
Jul 15 23:19:03.937035 kubelet[2270]: I0715 23:19:03.936935 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 23:19:03.937348 kubelet[2270]: E0715 23:19:03.937302 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 15 23:19:04.058482 kubelet[2270]: E0715 23:19:04.058443 2270 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms"
Jul 15 23:19:04.114846 kubelet[2270]: E0715 23:19:04.114749 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.115417 containerd[1497]: time="2025-07-15T23:19:04.115371056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21b4bccd324c77aeb99af15e6e6e8783,Namespace:kube-system,Attempt:0,}"
Jul 15 23:19:04.129650 kubelet[2270]: E0715 23:19:04.129621 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.130194 containerd[1497]: time="2025-07-15T23:19:04.130059090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}"
Jul 15 23:19:04.134326 kubelet[2270]: E0715 23:19:04.134302 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.134795 containerd[1497]: time="2025-07-15T23:19:04.134750373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}"
Jul 15 23:19:04.142697 containerd[1497]: time="2025-07-15T23:19:04.142648553Z" level=info msg="connecting to shim 89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73" address="unix:///run/containerd/s/76f090387b789095f0bda2b1d583161358baf6305cdd2a474a6bf0a183195b8a" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:19:04.163782 containerd[1497]: time="2025-07-15T23:19:04.162764457Z" level=info msg="connecting to shim 4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e" address="unix:///run/containerd/s/3a33f5b350edef9cd73d4a474494d9f9d3ad97c98bd2dcd9bcd9585ff0e07859" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:19:04.168111 containerd[1497]: time="2025-07-15T23:19:04.168069684Z" level=info msg="connecting to shim f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190" address="unix:///run/containerd/s/7a9a56bb2de532560e7f6bb7061262000449508897e1d6287c0f22b4871e24c2" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:19:04.172968 systemd[1]: Started cri-containerd-89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73.scope - libcontainer container 89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73.
Jul 15 23:19:04.198013 systemd[1]: Started cri-containerd-4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e.scope - libcontainer container 4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e.
Jul 15 23:19:04.199065 systemd[1]: Started cri-containerd-f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190.scope - libcontainer container f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190.
Jul 15 23:19:04.213461 containerd[1497]: time="2025-07-15T23:19:04.213319614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21b4bccd324c77aeb99af15e6e6e8783,Namespace:kube-system,Attempt:0,} returns sandbox id \"89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73\""
Jul 15 23:19:04.214715 kubelet[2270]: E0715 23:19:04.214685 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.216669 containerd[1497]: time="2025-07-15T23:19:04.216630128Z" level=info msg="CreateContainer within sandbox \"89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 15 23:19:04.233244 containerd[1497]: time="2025-07-15T23:19:04.233205691Z" level=info msg="Container b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:19:04.239544 containerd[1497]: time="2025-07-15T23:19:04.239509935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e\""
Jul 15 23:19:04.240349 kubelet[2270]: E0715 23:19:04.240301 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.242877 containerd[1497]: time="2025-07-15T23:19:04.242480795Z" level=info msg="CreateContainer within sandbox \"4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 15 23:19:04.243021 containerd[1497]: time="2025-07-15T23:19:04.242985430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190\""
Jul 15 23:19:04.243512 containerd[1497]: time="2025-07-15T23:19:04.243460427Z" level=info msg="CreateContainer within sandbox \"89739913c60ea07fa444191cb289f32e7ee670dcb422eadfc376d18ec1e94b73\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef\""
Jul 15 23:19:04.243766 kubelet[2270]: E0715 23:19:04.243749 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.243992 containerd[1497]: time="2025-07-15T23:19:04.243966104Z" level=info msg="StartContainer for \"b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef\""
Jul 15 23:19:04.245016 containerd[1497]: time="2025-07-15T23:19:04.244978220Z" level=info msg="connecting to shim b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef" address="unix:///run/containerd/s/76f090387b789095f0bda2b1d583161358baf6305cdd2a474a6bf0a183195b8a" protocol=ttrpc version=3
Jul 15 23:19:04.246816 containerd[1497]: time="2025-07-15T23:19:04.246792250Z" level=info msg="CreateContainer within sandbox \"f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 15 23:19:04.250145 containerd[1497]: time="2025-07-15T23:19:04.249802201Z" level=info msg="Container 65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:19:04.256757 containerd[1497]: time="2025-07-15T23:19:04.256723712Z" level=info msg="Container 750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:19:04.262984 systemd[1]: Started cri-containerd-b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef.scope - libcontainer container b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef.
Jul 15 23:19:04.263396 containerd[1497]: time="2025-07-15T23:19:04.263362565Z" level=info msg="CreateContainer within sandbox \"4ef33f5ea3de46d409ca0c9d5a004f30c323ab7d6ee4f811d3e60663f66b8d7e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d\""
Jul 15 23:19:04.263989 containerd[1497]: time="2025-07-15T23:19:04.263961327Z" level=info msg="StartContainer for \"65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d\""
Jul 15 23:19:04.265198 containerd[1497]: time="2025-07-15T23:19:04.265143310Z" level=info msg="connecting to shim 65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d" address="unix:///run/containerd/s/3a33f5b350edef9cd73d4a474494d9f9d3ad97c98bd2dcd9bcd9585ff0e07859" protocol=ttrpc version=3
Jul 15 23:19:04.266325 containerd[1497]: time="2025-07-15T23:19:04.266298137Z" level=info msg="CreateContainer within sandbox \"f83b9003d282b2fc332dbfda6491f4c31024ff9594d075dd0f1890a8e0f32190\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a\""
Jul 15 23:19:04.266813 containerd[1497]: time="2025-07-15T23:19:04.266794241Z" level=info msg="StartContainer for \"750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a\""
Jul 15 23:19:04.268535 containerd[1497]: time="2025-07-15T23:19:04.268504372Z" level=info msg="connecting to shim 750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a" address="unix:///run/containerd/s/7a9a56bb2de532560e7f6bb7061262000449508897e1d6287c0f22b4871e24c2" protocol=ttrpc version=3
Jul 15 23:19:04.285970 systemd[1]: Started cri-containerd-65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d.scope - libcontainer container 65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d.
Jul 15 23:19:04.289123 systemd[1]: Started cri-containerd-750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a.scope - libcontainer container 750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a.
Jul 15 23:19:04.311533 containerd[1497]: time="2025-07-15T23:19:04.310414109Z" level=info msg="StartContainer for \"b7048e089c132f42090aa2a2b6699c00db5012b19dba47272f26901d9b1d5cef\" returns successfully"
Jul 15 23:19:04.341135 kubelet[2270]: I0715 23:19:04.341003 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 23:19:04.341529 kubelet[2270]: E0715 23:19:04.341453 2270 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 15 23:19:04.371661 containerd[1497]: time="2025-07-15T23:19:04.370776202Z" level=info msg="StartContainer for \"65b730720079c0faeebae35698608b32a96037748f08a081ac4d29c590295b2d\" returns successfully"
Jul 15 23:19:04.372177 containerd[1497]: time="2025-07-15T23:19:04.372118359Z" level=info msg="StartContainer for \"750317197cecb68de47af15cafacdd8e38e1273602dc604d66921abba98b105a\" returns successfully"
Jul 15 23:19:04.410550 kubelet[2270]: W0715 23:19:04.410492 2270 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.74:6443: connect: connection refused
Jul 15 23:19:04.410641 kubelet[2270]: E0715 23:19:04.410566 2270 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError"
Jul 15 23:19:04.482194 kubelet[2270]: E0715 23:19:04.482088 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:04.482194 kubelet[2270]: E0715 23:19:04.482218 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.487451 kubelet[2270]: E0715 23:19:04.487372 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:04.488667 kubelet[2270]: E0715 23:19:04.488603 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:04.489397 kubelet[2270]: E0715 23:19:04.489381 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:04.489666 kubelet[2270]: E0715 23:19:04.489653 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:05.144851 kubelet[2270]: I0715 23:19:05.143652 2270 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 15 23:19:05.492950 kubelet[2270]: E0715 23:19:05.491576 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:05.492950 kubelet[2270]: E0715 23:19:05.491716 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:05.492950 kubelet[2270]: E0715 23:19:05.491928 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:05.492950 kubelet[2270]: E0715 23:19:05.492053 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:05.955976 kubelet[2270]: E0715 23:19:05.955920 2270 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 15 23:19:06.041277 kubelet[2270]: I0715 23:19:06.041180 2270 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 15 23:19:06.041277 kubelet[2270]: E0715 23:19:06.041214 2270 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 15 23:19:06.050505 kubelet[2270]: E0715 23:19:06.050474 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.151220 kubelet[2270]: E0715 23:19:06.151186 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.251986 kubelet[2270]: E0715 23:19:06.251884 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.352592 kubelet[2270]: E0715 23:19:06.352554 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.452665 kubelet[2270]: E0715 23:19:06.452629 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.492380 kubelet[2270]: E0715 23:19:06.492323 2270 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 15 23:19:06.492495 kubelet[2270]: E0715 23:19:06.492448 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:06.553622 kubelet[2270]: E0715 23:19:06.553533 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.654457 kubelet[2270]: E0715 23:19:06.654422 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.755032 kubelet[2270]: E0715 23:19:06.755000 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.855688 kubelet[2270]: E0715 23:19:06.855612 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:06.957852 kubelet[2270]: E0715 23:19:06.955898 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.056473 kubelet[2270]: E0715 23:19:07.056439 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.157314 kubelet[2270]: E0715 23:19:07.157271 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.258029 kubelet[2270]: E0715 23:19:07.257985 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.358800 kubelet[2270]: E0715 23:19:07.358760 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.459409 kubelet[2270]: E0715 23:19:07.459314 2270 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 15 23:19:07.557281 kubelet[2270]: I0715 23:19:07.557223 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 15 23:19:07.573994 kubelet[2270]: I0715 23:19:07.573857 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 15 23:19:07.577539 kubelet[2270]: I0715 23:19:07.577495 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:07.949854 kubelet[2270]: I0715 23:19:07.949813 2270 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:07.954388 kubelet[2270]: E0715 23:19:07.954361 2270 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 15 23:19:07.954538 kubelet[2270]: E0715 23:19:07.954522 2270 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:08.098945 systemd[1]: Reload requested from client PID 2548 ('systemctl') (unit session-7.scope)...
Jul 15 23:19:08.098959 systemd[1]: Reloading...
Jul 15 23:19:08.176883 zram_generator::config[2594]: No configuration found.
Jul 15 23:19:08.239611 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 23:19:08.340963 systemd[1]: Reloading finished in 241 ms. Jul 15 23:19:08.369076 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:19:08.387855 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 23:19:08.388820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:19:08.388886 systemd[1]: kubelet.service: Consumed 1.194s CPU time, 130.2M memory peak. Jul 15 23:19:08.390549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 23:19:08.515411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 23:19:08.519378 (kubelet)[2633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 23:19:08.557540 kubelet[2633]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 23:19:08.557540 kubelet[2633]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 15 23:19:08.557540 kubelet[2633]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 15 23:19:08.557933 kubelet[2633]: I0715 23:19:08.557567 2633 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 23:19:08.563703 kubelet[2633]: I0715 23:19:08.563638 2633 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 15 23:19:08.563703 kubelet[2633]: I0715 23:19:08.563670 2633 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 23:19:08.564884 kubelet[2633]: I0715 23:19:08.564758 2633 server.go:954] "Client rotation is on, will bootstrap in background" Jul 15 23:19:08.566436 kubelet[2633]: I0715 23:19:08.566397 2633 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 15 23:19:08.569456 kubelet[2633]: I0715 23:19:08.569391 2633 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 23:19:08.573301 kubelet[2633]: I0715 23:19:08.573281 2633 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 23:19:08.576107 kubelet[2633]: I0715 23:19:08.576085 2633 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 23:19:08.576930 kubelet[2633]: I0715 23:19:08.576893 2633 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 23:19:08.577096 kubelet[2633]: I0715 23:19:08.576932 2633 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 23:19:08.577162 kubelet[2633]: I0715 23:19:08.577110 2633 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 15 23:19:08.577162 kubelet[2633]: I0715 23:19:08.577119 2633 container_manager_linux.go:304] "Creating device plugin manager" Jul 15 23:19:08.577217 kubelet[2633]: I0715 23:19:08.577165 2633 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:19:08.577312 kubelet[2633]: I0715 23:19:08.577300 2633 kubelet.go:446] "Attempting to sync node with API server" Jul 15 23:19:08.577366 kubelet[2633]: I0715 23:19:08.577317 2633 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 23:19:08.577366 kubelet[2633]: I0715 23:19:08.577348 2633 kubelet.go:352] "Adding apiserver pod source" Jul 15 23:19:08.578877 kubelet[2633]: I0715 23:19:08.577994 2633 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 23:19:08.579837 kubelet[2633]: I0715 23:19:08.579716 2633 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 15 23:19:08.580735 kubelet[2633]: I0715 23:19:08.580695 2633 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 23:19:08.582844 kubelet[2633]: I0715 23:19:08.581256 2633 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 15 23:19:08.582844 kubelet[2633]: I0715 23:19:08.581669 2633 server.go:1287] "Started kubelet" Jul 15 23:19:08.582844 kubelet[2633]: I0715 23:19:08.581901 2633 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 23:19:08.584669 kubelet[2633]: I0715 23:19:08.582305 2633 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 23:19:08.584669 kubelet[2633]: I0715 23:19:08.583702 2633 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 23:19:08.588247 kubelet[2633]: I0715 23:19:08.588219 2633 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 23:19:08.588763 kubelet[2633]: I0715 23:19:08.588744 2633 
server.go:479] "Adding debug handlers to kubelet server" Jul 15 23:19:08.589957 kubelet[2633]: I0715 23:19:08.589929 2633 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 23:19:08.590695 kubelet[2633]: I0715 23:19:08.590673 2633 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 15 23:19:08.596378 kubelet[2633]: I0715 23:19:08.590804 2633 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 15 23:19:08.596378 kubelet[2633]: E0715 23:19:08.590965 2633 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 23:19:08.596378 kubelet[2633]: I0715 23:19:08.594145 2633 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 23:19:08.596714 kubelet[2633]: I0715 23:19:08.596686 2633 reconciler.go:26] "Reconciler: start to sync state" Jul 15 23:19:08.602759 kubelet[2633]: E0715 23:19:08.602222 2633 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 23:19:08.604106 kubelet[2633]: I0715 23:19:08.604065 2633 factory.go:221] Registration of the containerd container factory successfully Jul 15 23:19:08.604106 kubelet[2633]: I0715 23:19:08.604086 2633 factory.go:221] Registration of the systemd container factory successfully Jul 15 23:19:08.615865 kubelet[2633]: I0715 23:19:08.615632 2633 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 23:19:08.616816 kubelet[2633]: I0715 23:19:08.616796 2633 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 23:19:08.616939 kubelet[2633]: I0715 23:19:08.616928 2633 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 15 23:19:08.617107 kubelet[2633]: I0715 23:19:08.617096 2633 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 15 23:19:08.617644 kubelet[2633]: I0715 23:19:08.617159 2633 kubelet.go:2382] "Starting kubelet main sync loop" Jul 15 23:19:08.617644 kubelet[2633]: E0715 23:19:08.617199 2633 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 23:19:08.636478 kubelet[2633]: I0715 23:19:08.636455 2633 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 15 23:19:08.636688 kubelet[2633]: I0715 23:19:08.636651 2633 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 15 23:19:08.636688 kubelet[2633]: I0715 23:19:08.636681 2633 state_mem.go:36] "Initialized new in-memory state store" Jul 15 23:19:08.636911 kubelet[2633]: I0715 23:19:08.636894 2633 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 23:19:08.636991 kubelet[2633]: I0715 23:19:08.636968 2633 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 23:19:08.637045 kubelet[2633]: I0715 23:19:08.637038 2633 policy_none.go:49] "None policy: Start" Jul 15 23:19:08.637088 kubelet[2633]: I0715 23:19:08.637082 2633 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 15 23:19:08.637136 kubelet[2633]: I0715 23:19:08.637129 2633 state_mem.go:35] "Initializing new in-memory state store" Jul 15 23:19:08.637303 kubelet[2633]: I0715 23:19:08.637293 2633 state_mem.go:75] "Updated machine memory state" Jul 15 23:19:08.641741 kubelet[2633]: I0715 23:19:08.641716 2633 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 23:19:08.641943 kubelet[2633]: I0715 
23:19:08.641922 2633 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 23:19:08.641981 kubelet[2633]: I0715 23:19:08.641940 2633 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 23:19:08.642228 kubelet[2633]: I0715 23:19:08.642208 2633 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 23:19:08.644517 kubelet[2633]: E0715 23:19:08.644501 2633 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 15 23:19:08.717954 kubelet[2633]: I0715 23:19:08.717903 2633 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:08.718159 kubelet[2633]: I0715 23:19:08.717903 2633 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:19:08.718159 kubelet[2633]: I0715 23:19:08.718039 2633 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.722744 kubelet[2633]: E0715 23:19:08.722710 2633 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:08.723176 kubelet[2633]: E0715 23:19:08.723113 2633 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:19:08.723176 kubelet[2633]: E0715 23:19:08.723117 2633 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.745355 kubelet[2633]: I0715 23:19:08.745334 2633 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 15 23:19:08.752690 kubelet[2633]: I0715 23:19:08.752568 2633 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Jul 15 23:19:08.752758 kubelet[2633]: I0715 23:19:08.752739 2633 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 15 23:19:08.898558 kubelet[2633]: I0715 23:19:08.898516 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.898558 kubelet[2633]: I0715 23:19:08.898558 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.898719 kubelet[2633]: I0715 23:19:08.898581 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Jul 15 23:19:08.898719 kubelet[2633]: I0715 23:19:08.898599 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:08.898719 kubelet[2633]: I0715 23:19:08.898618 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:08.898719 kubelet[2633]: I0715 23:19:08.898635 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.898719 kubelet[2633]: I0715 23:19:08.898650 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:08.898864 kubelet[2633]: I0715 23:19:08.898666 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21b4bccd324c77aeb99af15e6e6e8783-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21b4bccd324c77aeb99af15e6e6e8783\") " pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:08.898864 kubelet[2633]: I0715 23:19:08.898682 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 23:19:09.023819 kubelet[2633]: E0715 23:19:09.023704 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.023819 kubelet[2633]: E0715 23:19:09.023751 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.023819 kubelet[2633]: E0715 23:19:09.023709 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.143868 sudo[2671]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 23:19:09.144148 sudo[2671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 23:19:09.579597 kubelet[2633]: I0715 23:19:09.579356 2633 apiserver.go:52] "Watching apiserver" Jul 15 23:19:09.597039 kubelet[2633]: I0715 23:19:09.596981 2633 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 15 23:19:09.598682 sudo[2671]: pam_unix(sudo:session): session closed for user root Jul 15 23:19:09.629370 kubelet[2633]: E0715 23:19:09.629225 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.629370 kubelet[2633]: I0715 23:19:09.629332 2633 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:09.629370 kubelet[2633]: I0715 23:19:09.629373 2633 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 15 23:19:09.636609 kubelet[2633]: E0715 23:19:09.636579 2633 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 23:19:09.637061 kubelet[2633]: E0715 23:19:09.636732 2633 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.637061 kubelet[2633]: E0715 23:19:09.636906 2633 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 15 23:19:09.637061 kubelet[2633]: E0715 23:19:09.636990 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:09.647757 kubelet[2633]: I0715 23:19:09.647667 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6476535 podStartE2EDuration="2.6476535s" podCreationTimestamp="2025-07-15 23:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:09.647455924 +0000 UTC m=+1.124202385" watchObservedRunningTime="2025-07-15 23:19:09.6476535 +0000 UTC m=+1.124399961" Jul 15 23:19:09.654756 kubelet[2633]: I0715 23:19:09.654465 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.654454015 podStartE2EDuration="2.654454015s" podCreationTimestamp="2025-07-15 23:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:09.654367116 +0000 UTC m=+1.131113537" watchObservedRunningTime="2025-07-15 23:19:09.654454015 +0000 UTC m=+1.131200436" Jul 15 23:19:09.661894 kubelet[2633]: I0715 23:19:09.661852 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.661842575 podStartE2EDuration="2.661842575s" podCreationTimestamp="2025-07-15 23:19:07 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:09.661640196 +0000 UTC m=+1.138386657" watchObservedRunningTime="2025-07-15 23:19:09.661842575 +0000 UTC m=+1.138589036" Jul 15 23:19:10.631181 kubelet[2633]: E0715 23:19:10.631151 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:10.631518 kubelet[2633]: E0715 23:19:10.631186 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:11.244871 sudo[1706]: pam_unix(sudo:session): session closed for user root Jul 15 23:19:11.247861 sshd[1705]: Connection closed by 10.0.0.1 port 47884 Jul 15 23:19:11.248099 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jul 15 23:19:11.251879 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:47884.service: Deactivated successfully. Jul 15 23:19:11.253490 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 23:19:11.253694 systemd[1]: session-7.scope: Consumed 7.912s CPU time, 258M memory peak. Jul 15 23:19:11.254578 systemd-logind[1473]: Session 7 logged out. Waiting for processes to exit. Jul 15 23:19:11.255557 systemd-logind[1473]: Removed session 7. 
Jul 15 23:19:14.065291 kubelet[2633]: E0715 23:19:14.065213 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:14.383821 kubelet[2633]: I0715 23:19:14.383781 2633 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 23:19:14.389169 containerd[1497]: time="2025-07-15T23:19:14.389140288Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 23:19:14.389683 kubelet[2633]: I0715 23:19:14.389576 2633 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 23:19:14.638484 kubelet[2633]: E0715 23:19:14.638388 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.218975 systemd[1]: Created slice kubepods-besteffort-pod91c2c19d_0717_4aba_9433_917973bb56e5.slice - libcontainer container kubepods-besteffort-pod91c2c19d_0717_4aba_9433_917973bb56e5.slice. Jul 15 23:19:15.231942 systemd[1]: Created slice kubepods-burstable-poddeafc9e4_ed7f_4899_9688_a72201e01351.slice - libcontainer container kubepods-burstable-poddeafc9e4_ed7f_4899_9688_a72201e01351.slice. 
Jul 15 23:19:15.246404 kubelet[2633]: I0715 23:19:15.246374 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-kernel\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246655 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-config-path\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246789 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91c2c19d-0717-4aba-9433-917973bb56e5-lib-modules\") pod \"kube-proxy-w595b\" (UID: \"91c2c19d-0717-4aba-9433-917973bb56e5\") " pod="kube-system/kube-proxy-w595b" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246877 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-etc-cni-netd\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246916 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91c2c19d-0717-4aba-9433-917973bb56e5-xtables-lock\") pod \"kube-proxy-w595b\" (UID: \"91c2c19d-0717-4aba-9433-917973bb56e5\") " pod="kube-system/kube-proxy-w595b" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246935 2633 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-lib-modules\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247372 kubelet[2633]: I0715 23:19:15.246956 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/deafc9e4-ed7f-4899-9688-a72201e01351-clustermesh-secrets\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.246978 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-run\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.246996 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-hostproc\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.247036 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cni-path\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.247062 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-xtables-lock\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.247079 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5bpt\" (UniqueName: \"kubernetes.io/projected/91c2c19d-0717-4aba-9433-917973bb56e5-kube-api-access-j5bpt\") pod \"kube-proxy-w595b\" (UID: \"91c2c19d-0717-4aba-9433-917973bb56e5\") " pod="kube-system/kube-proxy-w595b" Jul 15 23:19:15.247524 kubelet[2633]: I0715 23:19:15.247097 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-cgroup\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247642 kubelet[2633]: I0715 23:19:15.247111 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-net\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247642 kubelet[2633]: I0715 23:19:15.247131 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-hubble-tls\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247728 kubelet[2633]: I0715 23:19:15.247698 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb94k\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-kube-api-access-hb94k\") pod 
\"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.247757 kubelet[2633]: I0715 23:19:15.247738 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91c2c19d-0717-4aba-9433-917973bb56e5-kube-proxy\") pod \"kube-proxy-w595b\" (UID: \"91c2c19d-0717-4aba-9433-917973bb56e5\") " pod="kube-system/kube-proxy-w595b" Jul 15 23:19:15.247778 kubelet[2633]: I0715 23:19:15.247761 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-bpf-maps\") pod \"cilium-qq7rc\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") " pod="kube-system/cilium-qq7rc" Jul 15 23:19:15.475635 systemd[1]: Created slice kubepods-besteffort-pod826b1ca1_c1e3_491e_88d6_f438d2a4965e.slice - libcontainer container kubepods-besteffort-pod826b1ca1_c1e3_491e_88d6_f438d2a4965e.slice. 
Jul 15 23:19:15.533002 kubelet[2633]: E0715 23:19:15.532935 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.533593 containerd[1497]: time="2025-07-15T23:19:15.533558554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w595b,Uid:91c2c19d-0717-4aba-9433-917973bb56e5,Namespace:kube-system,Attempt:0,}" Jul 15 23:19:15.535033 kubelet[2633]: E0715 23:19:15.535001 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.535467 containerd[1497]: time="2025-07-15T23:19:15.535444932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq7rc,Uid:deafc9e4-ed7f-4899-9688-a72201e01351,Namespace:kube-system,Attempt:0,}" Jul 15 23:19:15.550430 kubelet[2633]: I0715 23:19:15.550396 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826b1ca1-c1e3-491e-88d6-f438d2a4965e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-jhdwx\" (UID: \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\") " pod="kube-system/cilium-operator-6c4d7847fc-jhdwx" Jul 15 23:19:15.550660 kubelet[2633]: I0715 23:19:15.550625 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dxs\" (UniqueName: \"kubernetes.io/projected/826b1ca1-c1e3-491e-88d6-f438d2a4965e-kube-api-access-92dxs\") pod \"cilium-operator-6c4d7847fc-jhdwx\" (UID: \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\") " pod="kube-system/cilium-operator-6c4d7847fc-jhdwx" Jul 15 23:19:15.559400 containerd[1497]: time="2025-07-15T23:19:15.559339991Z" level=info msg="connecting to shim 005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e" 
address="unix:///run/containerd/s/71a0e0614ed4dc4d56ee948ea40b486eefbb6555235df4a7ad9996bcea0af420" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:19:15.562449 containerd[1497]: time="2025-07-15T23:19:15.562401973Z" level=info msg="connecting to shim 9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:19:15.583073 systemd[1]: Started cri-containerd-005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e.scope - libcontainer container 005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e. Jul 15 23:19:15.585761 systemd[1]: Started cri-containerd-9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad.scope - libcontainer container 9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad. Jul 15 23:19:15.607969 containerd[1497]: time="2025-07-15T23:19:15.607932778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w595b,Uid:91c2c19d-0717-4aba-9433-917973bb56e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e\"" Jul 15 23:19:15.608700 kubelet[2633]: E0715 23:19:15.608680 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.618401 containerd[1497]: time="2025-07-15T23:19:15.617629418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qq7rc,Uid:deafc9e4-ed7f-4899-9688-a72201e01351,Namespace:kube-system,Attempt:0,} returns sandbox id \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\"" Jul 15 23:19:15.619023 kubelet[2633]: E0715 23:19:15.618991 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 15 23:19:15.619862 containerd[1497]: time="2025-07-15T23:19:15.619808126Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 23:19:15.621881 containerd[1497]: time="2025-07-15T23:19:15.621825899Z" level=info msg="CreateContainer within sandbox \"005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 23:19:15.631670 containerd[1497]: time="2025-07-15T23:19:15.631636967Z" level=info msg="Container 1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:15.638378 containerd[1497]: time="2025-07-15T23:19:15.638156399Z" level=info msg="CreateContainer within sandbox \"005ef218f0b77fb4f05348404d3fe0e5e913baf2bc8b724f2eafd8e03f679a7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45\"" Jul 15 23:19:15.639694 containerd[1497]: time="2025-07-15T23:19:15.638847321Z" level=info msg="StartContainer for \"1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45\"" Jul 15 23:19:15.641959 containerd[1497]: time="2025-07-15T23:19:15.641858472Z" level=info msg="connecting to shim 1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45" address="unix:///run/containerd/s/71a0e0614ed4dc4d56ee948ea40b486eefbb6555235df4a7ad9996bcea0af420" protocol=ttrpc version=3 Jul 15 23:19:15.644534 kubelet[2633]: E0715 23:19:15.644499 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.667031 systemd[1]: Started cri-containerd-1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45.scope - libcontainer container 1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45. 
Jul 15 23:19:15.699940 containerd[1497]: time="2025-07-15T23:19:15.699897594Z" level=info msg="StartContainer for \"1d010caa16727e2e1f9685dd8b1535556d0d26283bbc4992db3a109378078b45\" returns successfully" Jul 15 23:19:15.779759 kubelet[2633]: E0715 23:19:15.779631 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.783920 containerd[1497]: time="2025-07-15T23:19:15.781423577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jhdwx,Uid:826b1ca1-c1e3-491e-88d6-f438d2a4965e,Namespace:kube-system,Attempt:0,}" Jul 15 23:19:15.832347 containerd[1497]: time="2025-07-15T23:19:15.832305736Z" level=info msg="connecting to shim ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023" address="unix:///run/containerd/s/d789d0153137f942dcd60a8e84165cbcf0630be8d8e4f0710f8ef9a0be0d2784" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:19:15.853348 kubelet[2633]: E0715 23:19:15.853310 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:15.861128 systemd[1]: Started cri-containerd-ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023.scope - libcontainer container ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023. 
Jul 15 23:19:15.906652 containerd[1497]: time="2025-07-15T23:19:15.906610038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-jhdwx,Uid:826b1ca1-c1e3-491e-88d6-f438d2a4965e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\"" Jul 15 23:19:15.907450 kubelet[2633]: E0715 23:19:15.907424 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:16.649127 kubelet[2633]: E0715 23:19:16.649088 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:16.652853 kubelet[2633]: E0715 23:19:16.651767 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:16.660394 kubelet[2633]: I0715 23:19:16.660334 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w595b" podStartSLOduration=1.660318325 podStartE2EDuration="1.660318325s" podCreationTimestamp="2025-07-15 23:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:16.660214348 +0000 UTC m=+8.136960809" watchObservedRunningTime="2025-07-15 23:19:16.660318325 +0000 UTC m=+8.137064826" Jul 15 23:19:17.653739 kubelet[2633]: E0715 23:19:17.653701 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:18.865223 kubelet[2633]: E0715 23:19:18.864692 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:19.037207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492760744.mount: Deactivated successfully. Jul 15 23:19:19.655647 kubelet[2633]: E0715 23:19:19.655620 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:20.433529 containerd[1497]: time="2025-07-15T23:19:20.433477695Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:19:20.434014 containerd[1497]: time="2025-07-15T23:19:20.433981678Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 15 23:19:20.434874 containerd[1497]: time="2025-07-15T23:19:20.434848423Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:19:20.436126 containerd[1497]: time="2025-07-15T23:19:20.436087613Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.816233101s" Jul 15 23:19:20.436314 containerd[1497]: time="2025-07-15T23:19:20.436217230Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 
15 23:19:20.442184 containerd[1497]: time="2025-07-15T23:19:20.442147261Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 23:19:20.445114 containerd[1497]: time="2025-07-15T23:19:20.445061354Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:19:20.454231 containerd[1497]: time="2025-07-15T23:19:20.454191005Z" level=info msg="Container d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:20.458314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241026920.mount: Deactivated successfully. Jul 15 23:19:20.471519 containerd[1497]: time="2025-07-15T23:19:20.471393317Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\"" Jul 15 23:19:20.476637 containerd[1497]: time="2025-07-15T23:19:20.476278484Z" level=info msg="StartContainer for \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\"" Jul 15 23:19:20.485094 containerd[1497]: time="2025-07-15T23:19:20.485052097Z" level=info msg="connecting to shim d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" protocol=ttrpc version=3 Jul 15 23:19:20.530008 systemd[1]: Started cri-containerd-d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786.scope - libcontainer container d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786. 
Jul 15 23:19:20.555926 containerd[1497]: time="2025-07-15T23:19:20.555887044Z" level=info msg="StartContainer for \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" returns successfully" Jul 15 23:19:20.628455 systemd[1]: cri-containerd-d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786.scope: Deactivated successfully. Jul 15 23:19:20.657745 containerd[1497]: time="2025-07-15T23:19:20.657697175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" id:\"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" pid:3061 exited_at:{seconds:1752621560 nanos:652350362}" Jul 15 23:19:20.659398 kubelet[2633]: E0715 23:19:20.659362 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:20.665438 containerd[1497]: time="2025-07-15T23:19:20.663063275Z" level=info msg="received exit event container_id:\"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" id:\"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" pid:3061 exited_at:{seconds:1752621560 nanos:652350362}" Jul 15 23:19:21.452068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786-rootfs.mount: Deactivated successfully. 
Jul 15 23:19:21.565495 containerd[1497]: time="2025-07-15T23:19:21.565455036Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:19:21.566249 containerd[1497]: time="2025-07-15T23:19:21.566212675Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 15 23:19:21.567414 containerd[1497]: time="2025-07-15T23:19:21.567367041Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 23:19:21.568402 containerd[1497]: time="2025-07-15T23:19:21.568376226Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.126188187s" Jul 15 23:19:21.568469 containerd[1497]: time="2025-07-15T23:19:21.568408279Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 23:19:21.570912 containerd[1497]: time="2025-07-15T23:19:21.570876438Z" level=info msg="CreateContainer within sandbox \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 23:19:21.577862 containerd[1497]: time="2025-07-15T23:19:21.577239597Z" level=info msg="Container 
c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:21.583651 containerd[1497]: time="2025-07-15T23:19:21.583614321Z" level=info msg="CreateContainer within sandbox \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\"" Jul 15 23:19:21.584109 containerd[1497]: time="2025-07-15T23:19:21.584045503Z" level=info msg="StartContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\"" Jul 15 23:19:21.584946 containerd[1497]: time="2025-07-15T23:19:21.584912788Z" level=info msg="connecting to shim c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511" address="unix:///run/containerd/s/d789d0153137f942dcd60a8e84165cbcf0630be8d8e4f0710f8ef9a0be0d2784" protocol=ttrpc version=3 Jul 15 23:19:21.600996 systemd[1]: Started cri-containerd-c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511.scope - libcontainer container c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511. 
Jul 15 23:19:21.623168 containerd[1497]: time="2025-07-15T23:19:21.623120394Z" level=info msg="StartContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" returns successfully" Jul 15 23:19:21.662493 kubelet[2633]: E0715 23:19:21.662228 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:21.665602 containerd[1497]: time="2025-07-15T23:19:21.665378225Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:19:21.665677 kubelet[2633]: E0715 23:19:21.665509 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:21.712957 kubelet[2633]: I0715 23:19:21.711916 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-jhdwx" podStartSLOduration=1.050655363 podStartE2EDuration="6.71189737s" podCreationTimestamp="2025-07-15 23:19:15 +0000 UTC" firstStartedPulling="2025-07-15 23:19:15.907851881 +0000 UTC m=+7.384598342" lastFinishedPulling="2025-07-15 23:19:21.569093888 +0000 UTC m=+13.045840349" observedRunningTime="2025-07-15 23:19:21.710797427 +0000 UTC m=+13.187543888" watchObservedRunningTime="2025-07-15 23:19:21.71189737 +0000 UTC m=+13.188643831" Jul 15 23:19:21.713437 containerd[1497]: time="2025-07-15T23:19:21.713404045Z" level=info msg="Container d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:21.719579 containerd[1497]: time="2025-07-15T23:19:21.719530744Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\"" Jul 15 23:19:21.720767 containerd[1497]: time="2025-07-15T23:19:21.720720565Z" level=info msg="StartContainer for \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\"" Jul 15 23:19:21.721480 containerd[1497]: time="2025-07-15T23:19:21.721446671Z" level=info msg="connecting to shim d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" protocol=ttrpc version=3 Jul 15 23:19:21.749989 systemd[1]: Started cri-containerd-d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe.scope - libcontainer container d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe. Jul 15 23:19:21.798007 containerd[1497]: time="2025-07-15T23:19:21.797966407Z" level=info msg="StartContainer for \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" returns successfully" Jul 15 23:19:21.822974 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 23:19:21.823182 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:19:21.823841 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:19:21.825985 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 23:19:21.841475 systemd[1]: cri-containerd-d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe.scope: Deactivated successfully. Jul 15 23:19:21.841749 systemd[1]: cri-containerd-d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe.scope: Consumed 23ms CPU time, 5.5M memory peak, 2.3M written to disk. 
Jul 15 23:19:21.855859 containerd[1497]: time="2025-07-15T23:19:21.853078410Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" id:\"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" pid:3153 exited_at:{seconds:1752621561 nanos:852016683}" Jul 15 23:19:21.861507 containerd[1497]: time="2025-07-15T23:19:21.861458458Z" level=info msg="received exit event container_id:\"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" id:\"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" pid:3153 exited_at:{seconds:1752621561 nanos:852016683}" Jul 15 23:19:21.880856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 23:19:22.669392 kubelet[2633]: E0715 23:19:22.669359 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:22.670121 kubelet[2633]: E0715 23:19:22.669998 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:22.671564 containerd[1497]: time="2025-07-15T23:19:22.671529265Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:19:22.683852 containerd[1497]: time="2025-07-15T23:19:22.683274680Z" level=info msg="Container 316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:22.686371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605980798.mount: Deactivated successfully. 
Jul 15 23:19:22.692029 containerd[1497]: time="2025-07-15T23:19:22.691935303Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\"" Jul 15 23:19:22.692512 containerd[1497]: time="2025-07-15T23:19:22.692486843Z" level=info msg="StartContainer for \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\"" Jul 15 23:19:22.693991 containerd[1497]: time="2025-07-15T23:19:22.693897407Z" level=info msg="connecting to shim 316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" protocol=ttrpc version=3 Jul 15 23:19:22.720082 systemd[1]: Started cri-containerd-316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a.scope - libcontainer container 316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a. Jul 15 23:19:22.753815 containerd[1497]: time="2025-07-15T23:19:22.753713880Z" level=info msg="StartContainer for \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" returns successfully" Jul 15 23:19:22.768666 systemd[1]: cri-containerd-316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a.scope: Deactivated successfully. 
Jul 15 23:19:22.774543 containerd[1497]: time="2025-07-15T23:19:22.774496749Z" level=info msg="received exit event container_id:\"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" id:\"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" pid:3199 exited_at:{seconds:1752621562 nanos:774262775}" Jul 15 23:19:22.774859 containerd[1497]: time="2025-07-15T23:19:22.774606633Z" level=info msg="TaskExit event in podsandbox handler container_id:\"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" id:\"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" pid:3199 exited_at:{seconds:1752621562 nanos:774262775}" Jul 15 23:19:22.792448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a-rootfs.mount: Deactivated successfully. Jul 15 23:19:23.473626 update_engine[1476]: I20250715 23:19:23.473555 1476 update_attempter.cc:509] Updating boot flags... Jul 15 23:19:23.674938 kubelet[2633]: E0715 23:19:23.674902 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:23.678479 containerd[1497]: time="2025-07-15T23:19:23.678382048Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:19:23.693689 containerd[1497]: time="2025-07-15T23:19:23.693637242Z" level=info msg="Container 2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:23.697243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810802848.mount: Deactivated successfully. 
Jul 15 23:19:23.703536 containerd[1497]: time="2025-07-15T23:19:23.703493386Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\"" Jul 15 23:19:23.704027 containerd[1497]: time="2025-07-15T23:19:23.704003740Z" level=info msg="StartContainer for \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\"" Jul 15 23:19:23.705083 containerd[1497]: time="2025-07-15T23:19:23.705059901Z" level=info msg="connecting to shim 2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" protocol=ttrpc version=3 Jul 15 23:19:23.721981 systemd[1]: Started cri-containerd-2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2.scope - libcontainer container 2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2. Jul 15 23:19:23.750712 systemd[1]: cri-containerd-2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2.scope: Deactivated successfully. 
Jul 15 23:19:23.752043 containerd[1497]: time="2025-07-15T23:19:23.752011697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" id:\"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" pid:3257 exited_at:{seconds:1752621563 nanos:751573130}" Jul 15 23:19:23.752321 containerd[1497]: time="2025-07-15T23:19:23.752290643Z" level=info msg="received exit event container_id:\"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" id:\"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" pid:3257 exited_at:{seconds:1752621563 nanos:751573130}" Jul 15 23:19:23.758682 containerd[1497]: time="2025-07-15T23:19:23.758647737Z" level=info msg="StartContainer for \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" returns successfully" Jul 15 23:19:23.777248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2-rootfs.mount: Deactivated successfully. Jul 15 23:19:24.686274 kubelet[2633]: E0715 23:19:24.686226 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:19:24.689113 containerd[1497]: time="2025-07-15T23:19:24.689067819Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 23:19:24.806866 containerd[1497]: time="2025-07-15T23:19:24.806294760Z" level=info msg="Container bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:19:24.809206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount590502283.mount: Deactivated successfully. 
Jul 15 23:19:24.827567 containerd[1497]: time="2025-07-15T23:19:24.827467088Z" level=info msg="CreateContainer within sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\"" Jul 15 23:19:24.828302 containerd[1497]: time="2025-07-15T23:19:24.828276980Z" level=info msg="StartContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\"" Jul 15 23:19:24.829818 containerd[1497]: time="2025-07-15T23:19:24.829778803Z" level=info msg="connecting to shim bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a" address="unix:///run/containerd/s/ded914c9b9f790f55d6fdc37ccf399318c969e0fbf3c520e2f330396af0900cf" protocol=ttrpc version=3 Jul 15 23:19:24.852064 systemd[1]: Started cri-containerd-bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a.scope - libcontainer container bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a. 
Jul 15 23:19:24.905242 containerd[1497]: time="2025-07-15T23:19:24.905209608Z" level=info msg="StartContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" returns successfully"
Jul 15 23:19:25.011025 containerd[1497]: time="2025-07-15T23:19:25.010927976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" id:\"22fc0dfc25d2305685e4a66a96fa2eac858d59c745980726e75da11fe08c4e1e\" pid:3323 exited_at:{seconds:1752621565 nanos:10296999}"
Jul 15 23:19:25.103406 kubelet[2633]: I0715 23:19:25.103374 2633 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jul 15 23:19:25.147962 kubelet[2633]: I0715 23:19:25.147865 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9v7b\" (UniqueName: \"kubernetes.io/projected/ed48b19e-a77d-44cc-a426-457729df25b7-kube-api-access-w9v7b\") pod \"coredns-668d6bf9bc-j77bx\" (UID: \"ed48b19e-a77d-44cc-a426-457729df25b7\") " pod="kube-system/coredns-668d6bf9bc-j77bx"
Jul 15 23:19:25.147962 kubelet[2633]: I0715 23:19:25.147908 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjd6z\" (UniqueName: \"kubernetes.io/projected/9b1ecadd-e113-430d-9145-83014cdc1e79-kube-api-access-vjd6z\") pod \"coredns-668d6bf9bc-rxf92\" (UID: \"9b1ecadd-e113-430d-9145-83014cdc1e79\") " pod="kube-system/coredns-668d6bf9bc-rxf92"
Jul 15 23:19:25.147962 kubelet[2633]: I0715 23:19:25.147927 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b1ecadd-e113-430d-9145-83014cdc1e79-config-volume\") pod \"coredns-668d6bf9bc-rxf92\" (UID: \"9b1ecadd-e113-430d-9145-83014cdc1e79\") " pod="kube-system/coredns-668d6bf9bc-rxf92"
Jul 15 23:19:25.147962 kubelet[2633]: I0715 23:19:25.147954 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed48b19e-a77d-44cc-a426-457729df25b7-config-volume\") pod \"coredns-668d6bf9bc-j77bx\" (UID: \"ed48b19e-a77d-44cc-a426-457729df25b7\") " pod="kube-system/coredns-668d6bf9bc-j77bx"
Jul 15 23:19:25.149958 systemd[1]: Created slice kubepods-burstable-pod9b1ecadd_e113_430d_9145_83014cdc1e79.slice - libcontainer container kubepods-burstable-pod9b1ecadd_e113_430d_9145_83014cdc1e79.slice.
Jul 15 23:19:25.155959 systemd[1]: Created slice kubepods-burstable-poded48b19e_a77d_44cc_a426_457729df25b7.slice - libcontainer container kubepods-burstable-poded48b19e_a77d_44cc_a426_457729df25b7.slice.
Jul 15 23:19:25.453913 kubelet[2633]: E0715 23:19:25.453876 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:25.454619 containerd[1497]: time="2025-07-15T23:19:25.454410435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rxf92,Uid:9b1ecadd-e113-430d-9145-83014cdc1e79,Namespace:kube-system,Attempt:0,}"
Jul 15 23:19:25.462193 kubelet[2633]: E0715 23:19:25.459907 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:25.462292 containerd[1497]: time="2025-07-15T23:19:25.460960406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j77bx,Uid:ed48b19e-a77d-44cc-a426-457729df25b7,Namespace:kube-system,Attempt:0,}"
Jul 15 23:19:25.686338 kubelet[2633]: E0715 23:19:25.686309 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:25.701848 kubelet[2633]: I0715 23:19:25.701700 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qq7rc" podStartSLOduration=5.879152755 podStartE2EDuration="10.70168198s" podCreationTimestamp="2025-07-15 23:19:15 +0000 UTC" firstStartedPulling="2025-07-15 23:19:15.619480495 +0000 UTC m=+7.096226956" lastFinishedPulling="2025-07-15 23:19:20.44200972 +0000 UTC m=+11.918756181" observedRunningTime="2025-07-15 23:19:25.701299288 +0000 UTC m=+17.178045749" watchObservedRunningTime="2025-07-15 23:19:25.70168198 +0000 UTC m=+17.178428441"
Jul 15 23:19:26.691684 kubelet[2633]: E0715 23:19:26.690855 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:27.123429 systemd-networkd[1425]: cilium_host: Link UP
Jul 15 23:19:27.123624 systemd-networkd[1425]: cilium_net: Link UP
Jul 15 23:19:27.123746 systemd-networkd[1425]: cilium_net: Gained carrier
Jul 15 23:19:27.124050 systemd-networkd[1425]: cilium_host: Gained carrier
Jul 15 23:19:27.171983 systemd-networkd[1425]: cilium_net: Gained IPv6LL
Jul 15 23:19:27.221861 systemd-networkd[1425]: cilium_vxlan: Link UP
Jul 15 23:19:27.221872 systemd-networkd[1425]: cilium_vxlan: Gained carrier
Jul 15 23:19:27.533886 kernel: NET: Registered PF_ALG protocol family
Jul 15 23:19:27.553023 systemd-networkd[1425]: cilium_host: Gained IPv6LL
Jul 15 23:19:27.691563 kubelet[2633]: E0715 23:19:27.691521 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:28.114886 systemd-networkd[1425]: lxc_health: Link UP
Jul 15 23:19:28.120410 systemd-networkd[1425]: lxc_health: Gained carrier
Jul 15 23:19:28.549850 kernel: eth0: renamed from tmpdadd8
Jul 15 23:19:28.556925 kernel: eth0: renamed from tmp7768b
Jul 15 23:19:28.557115 systemd-networkd[1425]: lxc6f21433bbad2: Link UP
Jul 15 23:19:28.558249 systemd-networkd[1425]: lxc6f21433bbad2: Gained carrier
Jul 15 23:19:28.558372 systemd-networkd[1425]: lxce57907ada460: Link UP
Jul 15 23:19:28.558563 systemd-networkd[1425]: lxce57907ada460: Gained carrier
Jul 15 23:19:28.986009 systemd-networkd[1425]: cilium_vxlan: Gained IPv6LL
Jul 15 23:19:29.537544 kubelet[2633]: E0715 23:19:29.537489 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:29.816980 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Jul 15 23:19:30.265015 systemd-networkd[1425]: lxce57907ada460: Gained IPv6LL
Jul 15 23:19:30.392989 systemd-networkd[1425]: lxc6f21433bbad2: Gained IPv6LL
Jul 15 23:19:32.134733 containerd[1497]: time="2025-07-15T23:19:32.134681177Z" level=info msg="connecting to shim 7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7" address="unix:///run/containerd/s/1de9eada1f3e151273c8f015aa78c5d81469cadeac5181f23a26706ad517ced4" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:19:32.156245 containerd[1497]: time="2025-07-15T23:19:32.156192917Z" level=info msg="connecting to shim dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72" address="unix:///run/containerd/s/bca4a58cde3539e4115b2b8346dc1dc0c93da6310cf4bf0946a79aab6f4aed21" namespace=k8s.io protocol=ttrpc version=3
Jul 15 23:19:32.159009 systemd[1]: Started cri-containerd-7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7.scope - libcontainer container 7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7.
Jul 15 23:19:32.176151 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:19:32.181033 systemd[1]: Started cri-containerd-dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72.scope - libcontainer container dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72.
Jul 15 23:19:32.195503 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 15 23:19:32.198451 containerd[1497]: time="2025-07-15T23:19:32.198407796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j77bx,Uid:ed48b19e-a77d-44cc-a426-457729df25b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7\""
Jul 15 23:19:32.201559 kubelet[2633]: E0715 23:19:32.201522 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:32.209443 containerd[1497]: time="2025-07-15T23:19:32.209347192Z" level=info msg="CreateContainer within sandbox \"7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:19:32.219982 containerd[1497]: time="2025-07-15T23:19:32.219953625Z" level=info msg="Container 3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:19:32.220387 containerd[1497]: time="2025-07-15T23:19:32.220312954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rxf92,Uid:9b1ecadd-e113-430d-9145-83014cdc1e79,Namespace:kube-system,Attempt:0,} returns sandbox id \"dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72\""
Jul 15 23:19:32.222222 kubelet[2633]: E0715 23:19:32.222191 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:32.224325 containerd[1497]: time="2025-07-15T23:19:32.224276418Z" level=info msg="CreateContainer within sandbox \"dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 15 23:19:32.225392 containerd[1497]: time="2025-07-15T23:19:32.225362087Z" level=info msg="CreateContainer within sandbox \"7768b4cb1cac9ea65fb539706ebf59e24a47daee94425402d153fe5bf40d0ca7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d\""
Jul 15 23:19:32.225787 containerd[1497]: time="2025-07-15T23:19:32.225733539Z" level=info msg="StartContainer for \"3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d\""
Jul 15 23:19:32.226806 containerd[1497]: time="2025-07-15T23:19:32.226783160Z" level=info msg="connecting to shim 3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d" address="unix:///run/containerd/s/1de9eada1f3e151273c8f015aa78c5d81469cadeac5181f23a26706ad517ced4" protocol=ttrpc version=3
Jul 15 23:19:32.232086 containerd[1497]: time="2025-07-15T23:19:32.232056389Z" level=info msg="Container e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:19:32.240296 containerd[1497]: time="2025-07-15T23:19:32.240267587Z" level=info msg="CreateContainer within sandbox \"dadd833482d87b9ee5d4dd2bdf0eddd43598817c4dbeea2b9ca0682d3b8ceb72\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2\""
Jul 15 23:19:32.241667 containerd[1497]: time="2025-07-15T23:19:32.240887861Z" level=info msg="StartContainer for \"e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2\""
Jul 15 23:19:32.241938 containerd[1497]: time="2025-07-15T23:19:32.241916156Z" level=info msg="connecting to shim e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2" address="unix:///run/containerd/s/bca4a58cde3539e4115b2b8346dc1dc0c93da6310cf4bf0946a79aab6f4aed21" protocol=ttrpc version=3
Jul 15 23:19:32.250975 systemd[1]: Started cri-containerd-3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d.scope - libcontainer container 3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d.
Jul 15 23:19:32.259978 systemd[1]: Started cri-containerd-e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2.scope - libcontainer container e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2.
Jul 15 23:19:32.282494 containerd[1497]: time="2025-07-15T23:19:32.281468375Z" level=info msg="StartContainer for \"3937aa780725483ab77b7bcc3efdbca2ec8521a5766f94942603c1b670e9d63d\" returns successfully"
Jul 15 23:19:32.295845 containerd[1497]: time="2025-07-15T23:19:32.295799612Z" level=info msg="StartContainer for \"e60b9d606db76e46a6515705164cf313ccd66631615c35cf535b0a508920b1e2\" returns successfully"
Jul 15 23:19:32.704793 kubelet[2633]: E0715 23:19:32.704730 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:32.708684 kubelet[2633]: E0715 23:19:32.708584 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:32.716856 kubelet[2633]: I0715 23:19:32.716802 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rxf92" podStartSLOduration=17.716785317 podStartE2EDuration="17.716785317s" podCreationTimestamp="2025-07-15 23:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:32.715606345 +0000 UTC m=+24.192352806" watchObservedRunningTime="2025-07-15 23:19:32.716785317 +0000 UTC m=+24.193531738"
Jul 15 23:19:32.729741 kubelet[2633]: I0715 23:19:32.729345 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j77bx" podStartSLOduration=17.729326631 podStartE2EDuration="17.729326631s" podCreationTimestamp="2025-07-15 23:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:19:32.729259654 +0000 UTC m=+24.206006115" watchObservedRunningTime="2025-07-15 23:19:32.729326631 +0000 UTC m=+24.206073092"
Jul 15 23:19:33.710197 kubelet[2633]: E0715 23:19:33.710168 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:33.710551 kubelet[2633]: E0715 23:19:33.710232 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:34.711794 kubelet[2633]: E0715 23:19:34.711752 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:34.712190 kubelet[2633]: E0715 23:19:34.712143 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:37.220946 kubelet[2633]: I0715 23:19:37.220901 2633 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 15 23:19:37.221792 kubelet[2633]: E0715 23:19:37.221315 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:37.538701 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:39744.service - OpenSSH per-connection server daemon (10.0.0.1:39744).
Jul 15 23:19:37.584562 sshd[3975]: Accepted publickey for core from 10.0.0.1 port 39744 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:37.585966 sshd-session[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:37.591163 systemd-logind[1473]: New session 8 of user core.
Jul 15 23:19:37.605021 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 15 23:19:37.717748 kubelet[2633]: E0715 23:19:37.717720 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:19:37.738864 sshd[3977]: Connection closed by 10.0.0.1 port 39744
Jul 15 23:19:37.739231 sshd-session[3975]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:37.741960 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:39744.service: Deactivated successfully.
Jul 15 23:19:37.743754 systemd[1]: session-8.scope: Deactivated successfully.
Jul 15 23:19:37.745085 systemd-logind[1473]: Session 8 logged out. Waiting for processes to exit.
Jul 15 23:19:37.746867 systemd-logind[1473]: Removed session 8.
Jul 15 23:19:42.755669 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:42690.service - OpenSSH per-connection server daemon (10.0.0.1:42690).
Jul 15 23:19:42.810290 sshd[3993]: Accepted publickey for core from 10.0.0.1 port 42690 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:42.811646 sshd-session[3993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:42.816728 systemd-logind[1473]: New session 9 of user core.
Jul 15 23:19:42.835076 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 15 23:19:42.947502 sshd[3995]: Connection closed by 10.0.0.1 port 42690
Jul 15 23:19:42.947850 sshd-session[3993]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:42.951358 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:42690.service: Deactivated successfully.
Jul 15 23:19:42.953455 systemd[1]: session-9.scope: Deactivated successfully.
Jul 15 23:19:42.954149 systemd-logind[1473]: Session 9 logged out. Waiting for processes to exit.
Jul 15 23:19:42.955155 systemd-logind[1473]: Removed session 9.
Jul 15 23:19:47.960218 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:42704.service - OpenSSH per-connection server daemon (10.0.0.1:42704).
Jul 15 23:19:48.022631 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 42704 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:48.024032 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:48.028497 systemd-logind[1473]: New session 10 of user core.
Jul 15 23:19:48.039006 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 15 23:19:48.170579 sshd[4014]: Connection closed by 10.0.0.1 port 42704
Jul 15 23:19:48.171092 sshd-session[4012]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:48.173728 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:42704.service: Deactivated successfully.
Jul 15 23:19:48.175316 systemd[1]: session-10.scope: Deactivated successfully.
Jul 15 23:19:48.178094 systemd-logind[1473]: Session 10 logged out. Waiting for processes to exit.
Jul 15 23:19:48.178977 systemd-logind[1473]: Removed session 10.
Jul 15 23:19:53.190612 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:38292.service - OpenSSH per-connection server daemon (10.0.0.1:38292).
Jul 15 23:19:53.237509 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 38292 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:53.238983 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:53.243655 systemd-logind[1473]: New session 11 of user core.
Jul 15 23:19:53.257043 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 15 23:19:53.367818 sshd[4030]: Connection closed by 10.0.0.1 port 38292
Jul 15 23:19:53.368165 sshd-session[4028]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:53.376947 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:38292.service: Deactivated successfully.
Jul 15 23:19:53.379257 systemd[1]: session-11.scope: Deactivated successfully.
Jul 15 23:19:53.380133 systemd-logind[1473]: Session 11 logged out. Waiting for processes to exit.
Jul 15 23:19:53.384071 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:38306.service - OpenSSH per-connection server daemon (10.0.0.1:38306).
Jul 15 23:19:53.384686 systemd-logind[1473]: Removed session 11.
Jul 15 23:19:53.430447 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 38306 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:53.431680 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:53.436599 systemd-logind[1473]: New session 12 of user core.
Jul 15 23:19:53.449011 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 15 23:19:53.608853 sshd[4046]: Connection closed by 10.0.0.1 port 38306
Jul 15 23:19:53.610058 sshd-session[4044]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:53.618262 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:38306.service: Deactivated successfully.
Jul 15 23:19:53.619980 systemd[1]: session-12.scope: Deactivated successfully.
Jul 15 23:19:53.620851 systemd-logind[1473]: Session 12 logged out. Waiting for processes to exit.
Jul 15 23:19:53.625339 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:38310.service - OpenSSH per-connection server daemon (10.0.0.1:38310).
Jul 15 23:19:53.627582 systemd-logind[1473]: Removed session 12.
Jul 15 23:19:53.685807 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 38310 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:53.687061 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:53.691014 systemd-logind[1473]: New session 13 of user core.
Jul 15 23:19:53.700986 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 15 23:19:53.825783 sshd[4062]: Connection closed by 10.0.0.1 port 38310
Jul 15 23:19:53.826118 sshd-session[4058]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:53.829847 systemd-logind[1473]: Session 13 logged out. Waiting for processes to exit.
Jul 15 23:19:53.831365 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:38310.service: Deactivated successfully.
Jul 15 23:19:53.834364 systemd[1]: session-13.scope: Deactivated successfully.
Jul 15 23:19:53.837178 systemd-logind[1473]: Removed session 13.
Jul 15 23:19:58.839767 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:38316.service - OpenSSH per-connection server daemon (10.0.0.1:38316).
Jul 15 23:19:58.893397 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 38316 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:19:58.894725 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:19:58.902526 systemd-logind[1473]: New session 14 of user core.
Jul 15 23:19:58.927427 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 15 23:19:59.058012 sshd[4078]: Connection closed by 10.0.0.1 port 38316
Jul 15 23:19:59.058516 sshd-session[4076]: pam_unix(sshd:session): session closed for user core
Jul 15 23:19:59.062718 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:38316.service: Deactivated successfully.
Jul 15 23:19:59.067063 systemd[1]: session-14.scope: Deactivated successfully.
Jul 15 23:19:59.068156 systemd-logind[1473]: Session 14 logged out. Waiting for processes to exit.
Jul 15 23:19:59.070192 systemd-logind[1473]: Removed session 14.
Jul 15 23:20:04.080028 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:46896.service - OpenSSH per-connection server daemon (10.0.0.1:46896).
Jul 15 23:20:04.124567 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 46896 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:04.125301 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:04.129430 systemd-logind[1473]: New session 15 of user core.
Jul 15 23:20:04.138982 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 15 23:20:04.269908 sshd[4094]: Connection closed by 10.0.0.1 port 46896
Jul 15 23:20:04.268902 sshd-session[4092]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:04.281213 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:46896.service: Deactivated successfully.
Jul 15 23:20:04.283030 systemd[1]: session-15.scope: Deactivated successfully.
Jul 15 23:20:04.283909 systemd-logind[1473]: Session 15 logged out. Waiting for processes to exit.
Jul 15 23:20:04.288084 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:46900.service - OpenSSH per-connection server daemon (10.0.0.1:46900).
Jul 15 23:20:04.290143 systemd-logind[1473]: Removed session 15.
Jul 15 23:20:04.334952 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 46900 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:04.337959 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:04.346425 systemd-logind[1473]: New session 16 of user core.
Jul 15 23:20:04.354979 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 15 23:20:04.596349 sshd[4109]: Connection closed by 10.0.0.1 port 46900
Jul 15 23:20:04.596970 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:04.613983 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:46900.service: Deactivated successfully.
Jul 15 23:20:04.615968 systemd[1]: session-16.scope: Deactivated successfully.
Jul 15 23:20:04.620051 systemd-logind[1473]: Session 16 logged out. Waiting for processes to exit.
Jul 15 23:20:04.623301 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:46904.service - OpenSSH per-connection server daemon (10.0.0.1:46904).
Jul 15 23:20:04.625175 systemd-logind[1473]: Removed session 16.
Jul 15 23:20:04.670053 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 46904 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:04.671322 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:04.675496 systemd-logind[1473]: New session 17 of user core.
Jul 15 23:20:04.682979 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 15 23:20:05.327091 sshd[4122]: Connection closed by 10.0.0.1 port 46904
Jul 15 23:20:05.328016 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:05.339703 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:46904.service: Deactivated successfully.
Jul 15 23:20:05.344579 systemd[1]: session-17.scope: Deactivated successfully.
Jul 15 23:20:05.347116 systemd-logind[1473]: Session 17 logged out. Waiting for processes to exit.
Jul 15 23:20:05.352887 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:46910.service - OpenSSH per-connection server daemon (10.0.0.1:46910).
Jul 15 23:20:05.355014 systemd-logind[1473]: Removed session 17.
Jul 15 23:20:05.402025 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 46910 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:05.403222 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:05.407009 systemd-logind[1473]: New session 18 of user core.
Jul 15 23:20:05.417054 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 15 23:20:05.641776 sshd[4145]: Connection closed by 10.0.0.1 port 46910
Jul 15 23:20:05.643070 sshd-session[4143]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:05.651637 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:46910.service: Deactivated successfully.
Jul 15 23:20:05.653853 systemd[1]: session-18.scope: Deactivated successfully.
Jul 15 23:20:05.655145 systemd-logind[1473]: Session 18 logged out. Waiting for processes to exit.
Jul 15 23:20:05.657981 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:46920.service - OpenSSH per-connection server daemon (10.0.0.1:46920).
Jul 15 23:20:05.659716 systemd-logind[1473]: Removed session 18.
Jul 15 23:20:05.722044 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 46920 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:05.723233 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:05.727755 systemd-logind[1473]: New session 19 of user core.
Jul 15 23:20:05.741050 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 15 23:20:05.857852 sshd[4159]: Connection closed by 10.0.0.1 port 46920
Jul 15 23:20:05.858521 sshd-session[4157]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:05.862626 systemd-logind[1473]: Session 19 logged out. Waiting for processes to exit.
Jul 15 23:20:05.862823 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:46920.service: Deactivated successfully.
Jul 15 23:20:05.865455 systemd[1]: session-19.scope: Deactivated successfully.
Jul 15 23:20:05.867674 systemd-logind[1473]: Removed session 19.
Jul 15 23:20:10.870222 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:46934.service - OpenSSH per-connection server daemon (10.0.0.1:46934).
Jul 15 23:20:10.908306 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 46934 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:10.909510 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:10.915257 systemd-logind[1473]: New session 20 of user core.
Jul 15 23:20:10.924027 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 15 23:20:11.039638 sshd[4178]: Connection closed by 10.0.0.1 port 46934
Jul 15 23:20:11.040143 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:11.043564 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:46934.service: Deactivated successfully.
Jul 15 23:20:11.045218 systemd[1]: session-20.scope: Deactivated successfully.
Jul 15 23:20:11.047403 systemd-logind[1473]: Session 20 logged out. Waiting for processes to exit.
Jul 15 23:20:11.048743 systemd-logind[1473]: Removed session 20.
Jul 15 23:20:16.051928 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:54136.service - OpenSSH per-connection server daemon (10.0.0.1:54136).
Jul 15 23:20:16.099891 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 54136 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:16.100952 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:16.104384 systemd-logind[1473]: New session 21 of user core.
Jul 15 23:20:16.111043 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 15 23:20:16.220719 sshd[4195]: Connection closed by 10.0.0.1 port 54136
Jul 15 23:20:16.221014 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:16.224185 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:54136.service: Deactivated successfully.
Jul 15 23:20:16.225923 systemd[1]: session-21.scope: Deactivated successfully.
Jul 15 23:20:16.228328 systemd-logind[1473]: Session 21 logged out. Waiting for processes to exit.
Jul 15 23:20:16.229364 systemd-logind[1473]: Removed session 21.
Jul 15 23:20:21.235113 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:54138.service - OpenSSH per-connection server daemon (10.0.0.1:54138).
Jul 15 23:20:21.293507 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 54138 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:21.294623 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:21.298225 systemd-logind[1473]: New session 22 of user core.
Jul 15 23:20:21.310984 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 15 23:20:21.415868 sshd[4210]: Connection closed by 10.0.0.1 port 54138
Jul 15 23:20:21.416179 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:21.425899 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:54138.service: Deactivated successfully.
Jul 15 23:20:21.427968 systemd[1]: session-22.scope: Deactivated successfully.
Jul 15 23:20:21.429063 systemd-logind[1473]: Session 22 logged out. Waiting for processes to exit.
Jul 15 23:20:21.431911 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:54154.service - OpenSSH per-connection server daemon (10.0.0.1:54154).
Jul 15 23:20:21.433101 systemd-logind[1473]: Removed session 22.
Jul 15 23:20:21.491442 sshd[4223]: Accepted publickey for core from 10.0.0.1 port 54154 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg
Jul 15 23:20:21.492463 sshd-session[4223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 15 23:20:21.496253 systemd-logind[1473]: New session 23 of user core.
Jul 15 23:20:21.509977 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 15 23:20:23.464350 containerd[1497]: time="2025-07-15T23:20:23.464305117Z" level=info msg="StopContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" with timeout 30 (s)"
Jul 15 23:20:23.465507 containerd[1497]: time="2025-07-15T23:20:23.465475164Z" level=info msg="Stop container \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" with signal terminated"
Jul 15 23:20:23.476199 systemd[1]: cri-containerd-c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511.scope: Deactivated successfully.
Jul 15 23:20:23.478178 containerd[1497]: time="2025-07-15T23:20:23.478140542Z" level=info msg="received exit event container_id:\"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" id:\"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" pid:3119 exited_at:{seconds:1752621623 nanos:477894837}"
Jul 15 23:20:23.478427 containerd[1497]: time="2025-07-15T23:20:23.478396126Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" id:\"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" pid:3119 exited_at:{seconds:1752621623 nanos:477894837}"
Jul 15 23:20:23.488767 containerd[1497]: time="2025-07-15T23:20:23.488707729Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 15 23:20:23.493193 containerd[1497]: time="2025-07-15T23:20:23.493153054Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" id:\"61c6032facd9c96bbbb15936e12b36d85a2c32ac64ed2310fa2fbae6780b444d\" pid:4254 exited_at:{seconds:1752621623 nanos:492884351}"
Jul 15 23:20:23.499402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511-rootfs.mount: Deactivated successfully.
Jul 15 23:20:23.501339 containerd[1497]: time="2025-07-15T23:20:23.501308750Z" level=info msg="StopContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" with timeout 2 (s)"
Jul 15 23:20:23.501734 containerd[1497]: time="2025-07-15T23:20:23.501715325Z" level=info msg="Stop container \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" with signal terminated"
Jul 15 23:20:23.507982 systemd-networkd[1425]: lxc_health: Link DOWN
Jul 15 23:20:23.507989 systemd-networkd[1425]: lxc_health: Lost carrier
Jul 15 23:20:23.513209 containerd[1497]: time="2025-07-15T23:20:23.513174897Z" level=info msg="StopContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" returns successfully"
Jul 15 23:20:23.516044 containerd[1497]: time="2025-07-15T23:20:23.516008442Z" level=info msg="StopPodSandbox for \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\""
Jul 15 23:20:23.516198 containerd[1497]: time="2025-07-15T23:20:23.516178151Z" level=info msg="Container to stop \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.526532 systemd[1]: cri-containerd-ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023.scope: Deactivated successfully.
Jul 15 23:20:23.527395 containerd[1497]: time="2025-07-15T23:20:23.527124875Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" id:\"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" pid:2888 exit_status:137 exited_at:{seconds:1752621623 nanos:526754498}"
Jul 15 23:20:23.528086 systemd[1]: cri-containerd-bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a.scope: Deactivated successfully.
Jul 15 23:20:23.528775 systemd[1]: cri-containerd-bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a.scope: Consumed 6.403s CPU time, 121.6M memory peak, 144K read from disk, 12.9M written to disk.
Jul 15 23:20:23.529109 containerd[1497]: time="2025-07-15T23:20:23.528989159Z" level=info msg="received exit event container_id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" pid:3293 exited_at:{seconds:1752621623 nanos:528658780}"
Jul 15 23:20:23.555191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a-rootfs.mount: Deactivated successfully.
Jul 15 23:20:23.557645 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023-rootfs.mount: Deactivated successfully.
Jul 15 23:20:23.570185 containerd[1497]: time="2025-07-15T23:20:23.569904631Z" level=info msg="shim disconnected" id=ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023 namespace=k8s.io
Jul 15 23:20:23.581047 containerd[1497]: time="2025-07-15T23:20:23.570029863Z" level=warning msg="cleaning up after shim disconnected" id=ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023 namespace=k8s.io
Jul 15 23:20:23.581047 containerd[1497]: time="2025-07-15T23:20:23.581040823Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:20:23.581180 containerd[1497]: time="2025-07-15T23:20:23.573147751Z" level=info msg="StopContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" returns successfully"
Jul 15 23:20:23.581678 containerd[1497]: time="2025-07-15T23:20:23.581629706Z" level=info msg="StopPodSandbox for \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\""
Jul 15 23:20:23.581733 containerd[1497]: time="2025-07-15T23:20:23.581715061Z" level=info msg="Container to stop \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.581762 containerd[1497]: time="2025-07-15T23:20:23.581734500Z" level=info msg="Container to stop \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.581762 containerd[1497]: time="2025-07-15T23:20:23.581743699Z" level=info msg="Container to stop \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.581762 containerd[1497]: time="2025-07-15T23:20:23.581752459Z" level=info msg="Container to stop \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.581762 containerd[1497]: time="2025-07-15T23:20:23.581760618Z" level=info msg="Container to stop \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 15 23:20:23.587257 systemd[1]: cri-containerd-9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad.scope: Deactivated successfully.
Jul 15 23:20:23.596456 containerd[1497]: time="2025-07-15T23:20:23.596410713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" id:\"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" pid:3293 exited_at:{seconds:1752621623 nanos:528658780}"
Jul 15 23:20:23.596456 containerd[1497]: time="2025-07-15T23:20:23.596433432Z" level=info msg="received exit event sandbox_id:\"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" exit_status:137 exited_at:{seconds:1752621623 nanos:526754498}"
Jul 15 23:20:23.597249 containerd[1497]: time="2025-07-15T23:20:23.597207744Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" id:\"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" pid:2790 exit_status:137 exited_at:{seconds:1752621623 nanos:587351353}"
Jul 15 23:20:23.597408 containerd[1497]: time="2025-07-15T23:20:23.597382493Z" level=info msg="TearDown network for sandbox \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" successfully"
Jul 15 23:20:23.597408 containerd[1497]: time="2025-07-15T23:20:23.597401972Z" level=info msg="StopPodSandbox for \"ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023\" returns successfully"
Jul 15 23:20:23.598556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae560cfd9fb521ff1c67ff576b3ed1e26e629b6cff652d44af78ab76421fb023-shm.mount: Deactivated successfully.
Jul 15 23:20:23.612816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad-rootfs.mount: Deactivated successfully.
Jul 15 23:20:23.616637 containerd[1497]: time="2025-07-15T23:20:23.616603545Z" level=info msg="shim disconnected" id=9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad namespace=k8s.io
Jul 15 23:20:23.616749 containerd[1497]: time="2025-07-15T23:20:23.616633343Z" level=warning msg="cleaning up after shim disconnected" id=9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad namespace=k8s.io
Jul 15 23:20:23.616749 containerd[1497]: time="2025-07-15T23:20:23.616662462Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 15 23:20:23.617040 containerd[1497]: time="2025-07-15T23:20:23.616968723Z" level=info msg="received exit event sandbox_id:\"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" exit_status:137 exited_at:{seconds:1752621623 nanos:587351353}"
Jul 15 23:20:23.619916 kubelet[2633]: E0715 23:20:23.618899 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:23.621030 containerd[1497]: time="2025-07-15T23:20:23.619351575Z" level=info msg="TearDown network for sandbox \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" successfully"
Jul 15 23:20:23.621030 containerd[1497]: time="2025-07-15T23:20:23.619375094Z" level=info msg="StopPodSandbox for \"9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad\" returns successfully"
Jul 15 23:20:23.670380 kubelet[2633]: E0715 23:20:23.670345 2633 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 15 23:20:23.691698 kubelet[2633]: I0715 23:20:23.691660 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-bpf-maps\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691722 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-xtables-lock\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691751 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-hubble-tls\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691773 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb94k\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-kube-api-access-hb94k\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691801 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-lib-modules\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691821 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-kernel\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.691864 kubelet[2633]: I0715 23:20:23.691866 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-config-path\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691885 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/deafc9e4-ed7f-4899-9688-a72201e01351-clustermesh-secrets\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691900 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-hostproc\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691913 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cni-path\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691938 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826b1ca1-c1e3-491e-88d6-f438d2a4965e-cilium-config-path\") pod \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\" (UID: \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691954 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92dxs\" (UniqueName: \"kubernetes.io/projected/826b1ca1-c1e3-491e-88d6-f438d2a4965e-kube-api-access-92dxs\") pod \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\" (UID: \"826b1ca1-c1e3-491e-88d6-f438d2a4965e\") "
Jul 15 23:20:23.692015 kubelet[2633]: I0715 23:20:23.691970 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-etc-cni-netd\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692132 kubelet[2633]: I0715 23:20:23.691985 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-cgroup\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692132 kubelet[2633]: I0715 23:20:23.692008 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-net\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.692132 kubelet[2633]: I0715 23:20:23.692025 2633 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-run\") pod \"deafc9e4-ed7f-4899-9688-a72201e01351\" (UID: \"deafc9e4-ed7f-4899-9688-a72201e01351\") "
Jul 15 23:20:23.694479 kubelet[2633]: I0715 23:20:23.694440 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.694479 kubelet[2633]: I0715 23:20:23.694468 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-hostproc" (OuterVolumeSpecName: "hostproc") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.694532 kubelet[2633]: I0715 23:20:23.694446 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.694532 kubelet[2633]: I0715 23:20:23.694491 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cni-path" (OuterVolumeSpecName: "cni-path") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.696490 kubelet[2633]: I0715 23:20:23.696442 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.696548 kubelet[2633]: I0715 23:20:23.696491 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.696548 kubelet[2633]: I0715 23:20:23.696508 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.696548 kubelet[2633]: I0715 23:20:23.696522 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.700737 kubelet[2633]: I0715 23:20:23.700694 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/826b1ca1-c1e3-491e-88d6-f438d2a4965e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "826b1ca1-c1e3-491e-88d6-f438d2a4965e" (UID: "826b1ca1-c1e3-491e-88d6-f438d2a4965e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 15 23:20:23.700770 kubelet[2633]: I0715 23:20:23.700758 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.700793 kubelet[2633]: I0715 23:20:23.700775 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Jul 15 23:20:23.702469 kubelet[2633]: I0715 23:20:23.702376 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 15 23:20:23.705031 kubelet[2633]: I0715 23:20:23.704997 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deafc9e4-ed7f-4899-9688-a72201e01351-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 15 23:20:23.705571 kubelet[2633]: I0715 23:20:23.705503 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-kube-api-access-hb94k" (OuterVolumeSpecName: "kube-api-access-hb94k") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "kube-api-access-hb94k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 23:20:23.705571 kubelet[2633]: I0715 23:20:23.705538 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "deafc9e4-ed7f-4899-9688-a72201e01351" (UID: "deafc9e4-ed7f-4899-9688-a72201e01351"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 23:20:23.705822 kubelet[2633]: I0715 23:20:23.705801 2633 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/826b1ca1-c1e3-491e-88d6-f438d2a4965e-kube-api-access-92dxs" (OuterVolumeSpecName: "kube-api-access-92dxs") pod "826b1ca1-c1e3-491e-88d6-f438d2a4965e" (UID: "826b1ca1-c1e3-491e-88d6-f438d2a4965e"). InnerVolumeSpecName "kube-api-access-92dxs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793169 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793199 2633 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-lib-modules\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793207 2633 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793217 2633 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/deafc9e4-ed7f-4899-9688-a72201e01351-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793254 2633 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-hostproc\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793265 2633 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cni-path\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793273 2633 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-92dxs\" (UniqueName: \"kubernetes.io/projected/826b1ca1-c1e3-491e-88d6-f438d2a4965e-kube-api-access-92dxs\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793437 kubelet[2633]: I0715 23:20:23.793282 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/826b1ca1-c1e3-491e-88d6-f438d2a4965e-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793289 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793297 2633 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793304 2633 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793312 2633 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-cilium-run\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793319 2633 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-hubble-tls\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793337 2633 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-bpf-maps\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793344 2633 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/deafc9e4-ed7f-4899-9688-a72201e01351-xtables-lock\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.793676 kubelet[2633]: I0715 23:20:23.793351 2633 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hb94k\" (UniqueName: \"kubernetes.io/projected/deafc9e4-ed7f-4899-9688-a72201e01351-kube-api-access-hb94k\") on node \"localhost\" DevicePath \"\""
Jul 15 23:20:23.822593 kubelet[2633]: I0715 23:20:23.822563 2633 scope.go:117] "RemoveContainer" containerID="bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a"
Jul 15 23:20:23.828999 systemd[1]: Removed slice kubepods-burstable-poddeafc9e4_ed7f_4899_9688_a72201e01351.slice - libcontainer container kubepods-burstable-poddeafc9e4_ed7f_4899_9688_a72201e01351.slice.
Jul 15 23:20:23.829101 systemd[1]: kubepods-burstable-poddeafc9e4_ed7f_4899_9688_a72201e01351.slice: Consumed 6.564s CPU time, 121.9M memory peak, 144K read from disk, 15.2M written to disk.
Jul 15 23:20:23.833337 systemd[1]: Removed slice kubepods-besteffort-pod826b1ca1_c1e3_491e_88d6_f438d2a4965e.slice - libcontainer container kubepods-besteffort-pod826b1ca1_c1e3_491e_88d6_f438d2a4965e.slice.
Jul 15 23:20:23.839674 containerd[1497]: time="2025-07-15T23:20:23.839620604Z" level=info msg="RemoveContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\""
Jul 15 23:20:23.865687 containerd[1497]: time="2025-07-15T23:20:23.865630116Z" level=info msg="RemoveContainer for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" returns successfully"
Jul 15 23:20:23.866187 kubelet[2633]: I0715 23:20:23.866088 2633 scope.go:117] "RemoveContainer" containerID="2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2"
Jul 15 23:20:23.867629 containerd[1497]: time="2025-07-15T23:20:23.867594035Z" level=info msg="RemoveContainer for \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\""
Jul 15 23:20:23.873957 containerd[1497]: time="2025-07-15T23:20:23.873922684Z" level=info msg="RemoveContainer for \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" returns successfully"
Jul 15 23:20:23.874174 kubelet[2633]: I0715 23:20:23.874155 2633 scope.go:117] "RemoveContainer" containerID="316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a"
Jul 15 23:20:23.878853 containerd[1497]: time="2025-07-15T23:20:23.877983033Z" level=info msg="RemoveContainer for \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\""
Jul 15 23:20:23.881105 containerd[1497]: time="2025-07-15T23:20:23.881069962Z" level=info msg="RemoveContainer for \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" returns successfully"
Jul 15 23:20:23.881586 kubelet[2633]: I0715 23:20:23.881496 2633 scope.go:117] "RemoveContainer" containerID="d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe"
Jul 15 23:20:23.883190 containerd[1497]: time="2025-07-15T23:20:23.883164353Z" level=info msg="RemoveContainer for \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\""
Jul 15 23:20:23.885651 containerd[1497]: time="2025-07-15T23:20:23.885617761Z" level=info msg="RemoveContainer for \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" returns successfully"
Jul 15 23:20:23.885872 kubelet[2633]: I0715 23:20:23.885842 2633 scope.go:117] "RemoveContainer" containerID="d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786"
Jul 15 23:20:23.887464 containerd[1497]: time="2025-07-15T23:20:23.887375253Z" level=info msg="RemoveContainer for \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\""
Jul 15 23:20:23.890281 containerd[1497]: time="2025-07-15T23:20:23.890244315Z" level=info msg="RemoveContainer for \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" returns successfully"
Jul 15 23:20:23.890551 kubelet[2633]: I0715 23:20:23.890536 2633 scope.go:117] "RemoveContainer" containerID="bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a"
Jul 15 23:20:23.890781 containerd[1497]: time="2025-07-15T23:20:23.890744204Z" level=error msg="ContainerStatus for \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\": not found"
Jul 15 23:20:23.890941 kubelet[2633]: E0715 23:20:23.890917 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\": not found" containerID="bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a"
Jul 15 23:20:23.895216 kubelet[2633]: I0715 23:20:23.895098 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a"} err="failed to get container status \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfa240cbe7a7e0e80fdecff78b8e74ff29ea0db14985de9eeaecf9d96f03d97a\": not found"
Jul 15 23:20:23.895261 kubelet[2633]: I0715 23:20:23.895220 2633 scope.go:117] "RemoveContainer" containerID="2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2"
Jul 15 23:20:23.895467 containerd[1497]: time="2025-07-15T23:20:23.895431315Z" level=error msg="ContainerStatus for \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\": not found"
Jul 15 23:20:23.895584 kubelet[2633]: E0715 23:20:23.895555 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\": not found" containerID="2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2"
Jul 15 23:20:23.895621 kubelet[2633]: I0715 23:20:23.895582 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2"} err="failed to get container status \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"2283d9f33fc82c56241c0c47386c3011054423b6b0d3483f2c71ca1693c1c2c2\": not found"
Jul 15 23:20:23.895621 kubelet[2633]: I0715 23:20:23.895601 2633 scope.go:117] "RemoveContainer" containerID="316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a"
Jul 15 23:20:23.895791 containerd[1497]: time="2025-07-15T23:20:23.895759095Z" level=error msg="ContainerStatus for \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\": not found"
Jul 15 23:20:23.895925 kubelet[2633]: E0715 23:20:23.895906 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\": not found" containerID="316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a"
Jul 15 23:20:23.895964 kubelet[2633]: I0715 23:20:23.895930 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a"} err="failed to get container status \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\": rpc error: code = NotFound desc = an error occurred when try to find container \"316352e4f2fa772192cbcfe90e60eceb69f61890a95961d856d302243f04e20a\": not found"
Jul 15 23:20:23.895964 kubelet[2633]: I0715 23:20:23.895943 2633 scope.go:117] "RemoveContainer" containerID="d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe"
Jul 15 23:20:23.896092 containerd[1497]: time="2025-07-15T23:20:23.896066116Z" level=error msg="ContainerStatus for \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\": not found"
Jul 15 23:20:23.896211 kubelet[2633]: E0715 23:20:23.896190 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\": not found" containerID="d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe"
Jul 15 23:20:23.896281 kubelet[2633]: I0715 23:20:23.896262 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe"} err="failed to get container status \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5ae7959a99cfb0f18854ebbaf6682d1febe73888265e7a1ae0112c68d5d79fe\": not found"
Jul 15 23:20:23.896340 kubelet[2633]: I0715 23:20:23.896328 2633 scope.go:117] "RemoveContainer" containerID="d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786"
Jul 15 23:20:23.896589 containerd[1497]: time="2025-07-15T23:20:23.896525287Z" level=error msg="ContainerStatus for \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\": not found"
Jul 15 23:20:23.896672 kubelet[2633]: E0715 23:20:23.896653 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\": not found" containerID="d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786"
Jul 15 23:20:23.896706 kubelet[2633]: I0715 23:20:23.896675 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786"} err="failed to get container status \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4cdb8bab79287cb8e15a47cb5ad18bd3f5fce7e05f6afb43a8a393de87c6786\": not found"
Jul 15 23:20:23.896706 kubelet[2633]: I0715 23:20:23.896690 2633 scope.go:117] "RemoveContainer" containerID="c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511"
Jul 15 23:20:23.898035 containerd[1497]: time="2025-07-15T23:20:23.898008316Z" level=info msg="RemoveContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\""
Jul 15 23:20:23.900399 containerd[1497]: time="2025-07-15T23:20:23.900364930Z" level=info msg="RemoveContainer for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" returns successfully"
Jul 15 23:20:23.900553 kubelet[2633]: I0715 23:20:23.900524 2633 scope.go:117] "RemoveContainer" containerID="c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511"
Jul 15 23:20:23.900790 containerd[1497]: time="2025-07-15T23:20:23.900755786Z" level=error msg="ContainerStatus for \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\": not found"
Jul 15 23:20:23.900962 kubelet[2633]: E0715 23:20:23.900939 2633 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\": not found" containerID="c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511"
Jul 15 23:20:23.901010 kubelet[2633]: I0715 23:20:23.900967 2633 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511"} err="failed to get container status \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9d7da8817f86528822523f1eaad99bf0bb6d8f0b6cadb5a6428d2259553d511\": not found"
Jul 15 23:20:24.499436 systemd[1]: var-lib-kubelet-pods-826b1ca1\x2dc1e3\x2d491e\x2d88d6\x2df438d2a4965e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d92dxs.mount: Deactivated successfully.
Jul 15 23:20:24.499535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9454e8c67e3cc660efc7774b07628dcf59596d4eafa498bd37674372fed7e3ad-shm.mount: Deactivated successfully. Jul 15 23:20:24.499588 systemd[1]: var-lib-kubelet-pods-deafc9e4\x2ded7f\x2d4899\x2d9688\x2da72201e01351-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhb94k.mount: Deactivated successfully. Jul 15 23:20:24.499638 systemd[1]: var-lib-kubelet-pods-deafc9e4\x2ded7f\x2d4899\x2d9688\x2da72201e01351-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 23:20:24.499682 systemd[1]: var-lib-kubelet-pods-deafc9e4\x2ded7f\x2d4899\x2d9688\x2da72201e01351-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 23:20:24.620920 kubelet[2633]: I0715 23:20:24.620575 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="826b1ca1-c1e3-491e-88d6-f438d2a4965e" path="/var/lib/kubelet/pods/826b1ca1-c1e3-491e-88d6-f438d2a4965e/volumes" Jul 15 23:20:24.621915 kubelet[2633]: I0715 23:20:24.621365 2633 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deafc9e4-ed7f-4899-9688-a72201e01351" path="/var/lib/kubelet/pods/deafc9e4-ed7f-4899-9688-a72201e01351/volumes" Jul 15 23:20:25.425190 sshd[4225]: Connection closed by 10.0.0.1 port 54154 Jul 15 23:20:25.425899 sshd-session[4223]: pam_unix(sshd:session): session closed for user core Jul 15 23:20:25.435822 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:54154.service: Deactivated successfully. Jul 15 23:20:25.437443 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 23:20:25.437707 systemd[1]: session-23.scope: Consumed 1.285s CPU time, 24M memory peak. Jul 15 23:20:25.438218 systemd-logind[1473]: Session 23 logged out. Waiting for processes to exit. Jul 15 23:20:25.441167 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:43484.service - OpenSSH per-connection server daemon (10.0.0.1:43484). 
Jul 15 23:20:25.441627 systemd-logind[1473]: Removed session 23. Jul 15 23:20:25.503247 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 43484 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:20:25.504521 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:20:25.509133 systemd-logind[1473]: New session 24 of user core. Jul 15 23:20:25.530027 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 23:20:27.081978 sshd[4378]: Connection closed by 10.0.0.1 port 43484 Jul 15 23:20:27.083077 sshd-session[4376]: pam_unix(sshd:session): session closed for user core Jul 15 23:20:27.096029 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:43484.service: Deactivated successfully. Jul 15 23:20:27.099042 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 23:20:27.100028 systemd[1]: session-24.scope: Consumed 1.474s CPU time, 26M memory peak. Jul 15 23:20:27.100759 systemd-logind[1473]: Session 24 logged out. Waiting for processes to exit. Jul 15 23:20:27.105078 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:43488.service - OpenSSH per-connection server daemon (10.0.0.1:43488). Jul 15 23:20:27.106885 systemd-logind[1473]: Removed session 24. 
Jul 15 23:20:27.147700 kubelet[2633]: I0715 23:20:27.147654 2633 memory_manager.go:355] "RemoveStaleState removing state" podUID="deafc9e4-ed7f-4899-9688-a72201e01351" containerName="cilium-agent" Jul 15 23:20:27.147700 kubelet[2633]: I0715 23:20:27.147682 2633 memory_manager.go:355] "RemoveStaleState removing state" podUID="826b1ca1-c1e3-491e-88d6-f438d2a4965e" containerName="cilium-operator" Jul 15 23:20:27.156144 sshd[4390]: Accepted publickey for core from 10.0.0.1 port 43488 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:20:27.158766 systemd[1]: Created slice kubepods-burstable-pod579d0e76_9c90_49ef_97c5_26445c90dd0c.slice - libcontainer container kubepods-burstable-pod579d0e76_9c90_49ef_97c5_26445c90dd0c.slice. Jul 15 23:20:27.161108 sshd-session[4390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:20:27.171000 systemd-logind[1473]: New session 25 of user core. Jul 15 23:20:27.174066 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 15 23:20:27.211732 kubelet[2633]: I0715 23:20:27.211687 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-cilium-cgroup\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.211732 kubelet[2633]: I0715 23:20:27.211730 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/579d0e76-9c90-49ef-97c5-26445c90dd0c-hubble-tls\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.211913 kubelet[2633]: I0715 23:20:27.211752 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-hostproc\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.211913 kubelet[2633]: I0715 23:20:27.211766 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-etc-cni-netd\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.211913 kubelet[2633]: I0715 23:20:27.211785 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-host-proc-sys-net\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.211913 kubelet[2633]: I0715 23:20:27.211863 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-cilium-run\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212004 kubelet[2633]: I0715 23:20:27.211914 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-xtables-lock\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212004 kubelet[2633]: I0715 23:20:27.211940 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/579d0e76-9c90-49ef-97c5-26445c90dd0c-cilium-config-path\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212004 kubelet[2633]: I0715 23:20:27.211957 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/579d0e76-9c90-49ef-97c5-26445c90dd0c-cilium-ipsec-secrets\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212004 kubelet[2633]: I0715 23:20:27.211978 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d495k\" (UniqueName: \"kubernetes.io/projected/579d0e76-9c90-49ef-97c5-26445c90dd0c-kube-api-access-d495k\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212004 kubelet[2633]: I0715 23:20:27.211998 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-host-proc-sys-kernel\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212099 kubelet[2633]: I0715 23:20:27.212033 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-bpf-maps\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212099 kubelet[2633]: I0715 23:20:27.212066 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/579d0e76-9c90-49ef-97c5-26445c90dd0c-clustermesh-secrets\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212139 kubelet[2633]: I0715 23:20:27.212104 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-cni-path\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.212139 kubelet[2633]: I0715 23:20:27.212120 2633 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/579d0e76-9c90-49ef-97c5-26445c90dd0c-lib-modules\") pod \"cilium-9mz7r\" (UID: \"579d0e76-9c90-49ef-97c5-26445c90dd0c\") " pod="kube-system/cilium-9mz7r" Jul 15 23:20:27.225929 sshd[4392]: Connection closed by 10.0.0.1 port 43488 Jul 15 23:20:27.227143 sshd-session[4390]: pam_unix(sshd:session): session closed for user core Jul 15 23:20:27.240353 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:43488.service: Deactivated successfully. 
Jul 15 23:20:27.242110 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 23:20:27.242858 systemd-logind[1473]: Session 25 logged out. Waiting for processes to exit. Jul 15 23:20:27.247577 systemd[1]: Started sshd@25-10.0.0.74:22-10.0.0.1:43492.service - OpenSSH per-connection server daemon (10.0.0.1:43492). Jul 15 23:20:27.248253 systemd-logind[1473]: Removed session 25. Jul 15 23:20:27.297813 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 43492 ssh2: RSA SHA256:WKzD1w5xALFuZEbHA74yUDpJiUV5Q0YeQNUQBHTTLNg Jul 15 23:20:27.298933 sshd-session[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 23:20:27.303524 systemd-logind[1473]: New session 26 of user core. Jul 15 23:20:27.310979 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 23:20:27.464647 kubelet[2633]: E0715 23:20:27.464581 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:20:27.466136 containerd[1497]: time="2025-07-15T23:20:27.466089512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mz7r,Uid:579d0e76-9c90-49ef-97c5-26445c90dd0c,Namespace:kube-system,Attempt:0,}" Jul 15 23:20:27.478466 containerd[1497]: time="2025-07-15T23:20:27.478366534Z" level=info msg="connecting to shim 772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" namespace=k8s.io protocol=ttrpc version=3 Jul 15 23:20:27.506019 systemd[1]: Started cri-containerd-772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b.scope - libcontainer container 772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b. 
Jul 15 23:20:27.530454 containerd[1497]: time="2025-07-15T23:20:27.530402082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9mz7r,Uid:579d0e76-9c90-49ef-97c5-26445c90dd0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\"" Jul 15 23:20:27.531159 kubelet[2633]: E0715 23:20:27.531119 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:20:27.533493 containerd[1497]: time="2025-07-15T23:20:27.533382781Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 23:20:27.539749 containerd[1497]: time="2025-07-15T23:20:27.539707323Z" level=info msg="Container 83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:20:27.546937 containerd[1497]: time="2025-07-15T23:20:27.546819828Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\"" Jul 15 23:20:27.549165 containerd[1497]: time="2025-07-15T23:20:27.549082641Z" level=info msg="StartContainer for \"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\"" Jul 15 23:20:27.550264 containerd[1497]: time="2025-07-15T23:20:27.550238227Z" level=info msg="connecting to shim 83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" protocol=ttrpc version=3 Jul 15 23:20:27.571055 systemd[1]: Started cri-containerd-83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df.scope - libcontainer 
container 83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df. Jul 15 23:20:27.601666 containerd[1497]: time="2025-07-15T23:20:27.598195767Z" level=info msg="StartContainer for \"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\" returns successfully" Jul 15 23:20:27.610188 systemd[1]: cri-containerd-83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df.scope: Deactivated successfully. Jul 15 23:20:27.613590 containerd[1497]: time="2025-07-15T23:20:27.613528965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\" id:\"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\" pid:4471 exited_at:{seconds:1752621627 nanos:613041388}" Jul 15 23:20:27.614068 containerd[1497]: time="2025-07-15T23:20:27.614002302Z" level=info msg="received exit event container_id:\"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\" id:\"83ca5f960cacb2ba025ba9f66c88d969c4c4e64767795d8630ddfc4a744db6df\" pid:4471 exited_at:{seconds:1752621627 nanos:613041388}" Jul 15 23:20:27.835059 kubelet[2633]: E0715 23:20:27.834788 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:20:27.838900 containerd[1497]: time="2025-07-15T23:20:27.838858907Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 23:20:27.845288 containerd[1497]: time="2025-07-15T23:20:27.845253205Z" level=info msg="Container b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:20:27.851499 containerd[1497]: time="2025-07-15T23:20:27.851450513Z" level=info msg="CreateContainer within sandbox 
\"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\"" Jul 15 23:20:27.852061 containerd[1497]: time="2025-07-15T23:20:27.851948090Z" level=info msg="StartContainer for \"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\"" Jul 15 23:20:27.853039 containerd[1497]: time="2025-07-15T23:20:27.853000280Z" level=info msg="connecting to shim b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" protocol=ttrpc version=3 Jul 15 23:20:27.875018 systemd[1]: Started cri-containerd-b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b.scope - libcontainer container b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b. Jul 15 23:20:27.906075 containerd[1497]: time="2025-07-15T23:20:27.906018742Z" level=info msg="StartContainer for \"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\" returns successfully" Jul 15 23:20:27.913097 systemd[1]: cri-containerd-b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b.scope: Deactivated successfully. 
Jul 15 23:20:27.914864 containerd[1497]: time="2025-07-15T23:20:27.914041524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\" id:\"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\" pid:4516 exited_at:{seconds:1752621627 nanos:913302679}" Jul 15 23:20:27.915129 containerd[1497]: time="2025-07-15T23:20:27.915099234Z" level=info msg="received exit event container_id:\"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\" id:\"b0f9c667ad61d7793aed1da868d7d3abc353ceaf460f5dce954da294f332ac6b\" pid:4516 exited_at:{seconds:1752621627 nanos:913302679}" Jul 15 23:20:28.672265 kubelet[2633]: E0715 23:20:28.672203 2633 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 23:20:28.839048 kubelet[2633]: E0715 23:20:28.839000 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:20:28.842083 containerd[1497]: time="2025-07-15T23:20:28.842031201Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 23:20:28.857222 containerd[1497]: time="2025-07-15T23:20:28.856752078Z" level=info msg="Container 7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:20:28.863590 containerd[1497]: time="2025-07-15T23:20:28.863536861Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\"" Jul 15 23:20:28.864845 
containerd[1497]: time="2025-07-15T23:20:28.864667611Z" level=info msg="StartContainer for \"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\"" Jul 15 23:20:28.866288 containerd[1497]: time="2025-07-15T23:20:28.866249102Z" level=info msg="connecting to shim 7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" protocol=ttrpc version=3 Jul 15 23:20:28.890060 systemd[1]: Started cri-containerd-7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37.scope - libcontainer container 7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37. Jul 15 23:20:28.927246 containerd[1497]: time="2025-07-15T23:20:28.927052803Z" level=info msg="StartContainer for \"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\" returns successfully" Jul 15 23:20:28.927877 systemd[1]: cri-containerd-7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37.scope: Deactivated successfully. Jul 15 23:20:28.929898 containerd[1497]: time="2025-07-15T23:20:28.929554774Z" level=info msg="received exit event container_id:\"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\" id:\"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\" pid:4561 exited_at:{seconds:1752621628 nanos:928824885}" Jul 15 23:20:28.933703 containerd[1497]: time="2025-07-15T23:20:28.933657954Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\" id:\"7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37\" pid:4561 exited_at:{seconds:1752621628 nanos:928824885}" Jul 15 23:20:29.318174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cf4a85cdd35ffdfa5230a543af7bd14b2c07ba10a7a87912c29b88ff8570f37-rootfs.mount: Deactivated successfully. 
Jul 15 23:20:29.842885 kubelet[2633]: I0715 23:20:29.842806 2633 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T23:20:29Z","lastTransitionTime":"2025-07-15T23:20:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 23:20:29.851815 kubelet[2633]: E0715 23:20:29.851779 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 15 23:20:29.856261 containerd[1497]: time="2025-07-15T23:20:29.856206503Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 23:20:29.875806 containerd[1497]: time="2025-07-15T23:20:29.875127498Z" level=info msg="Container 0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72: CDI devices from CRI Config.CDIDevices: []" Jul 15 23:20:29.883296 containerd[1497]: time="2025-07-15T23:20:29.883249889Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\"" Jul 15 23:20:29.883735 containerd[1497]: time="2025-07-15T23:20:29.883706871Z" level=info msg="StartContainer for \"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\"" Jul 15 23:20:29.884677 containerd[1497]: time="2025-07-15T23:20:29.884645473Z" level=info msg="connecting to shim 0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" protocol=ttrpc version=3 Jul 15 
23:20:29.911023 systemd[1]: Started cri-containerd-0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72.scope - libcontainer container 0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72. Jul 15 23:20:29.940291 systemd[1]: cri-containerd-0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72.scope: Deactivated successfully. Jul 15 23:20:29.942282 containerd[1497]: time="2025-07-15T23:20:29.942230383Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\" id:\"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\" pid:4600 exited_at:{seconds:1752621629 nanos:941982593}" Jul 15 23:20:29.944583 containerd[1497]: time="2025-07-15T23:20:29.944207703Z" level=info msg="received exit event container_id:\"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\" id:\"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\" pid:4600 exited_at:{seconds:1752621629 nanos:941982593}" Jul 15 23:20:29.945313 containerd[1497]: time="2025-07-15T23:20:29.945278060Z" level=info msg="StartContainer for \"0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72\" returns successfully" Jul 15 23:20:29.962930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0299126f30f71ae670ec656ad3115f61061fdcce15ecf890da475cfedb2dae72-rootfs.mount: Deactivated successfully. 
Jul 15 23:20:30.857430 kubelet[2633]: E0715 23:20:30.857323 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:30.860887 containerd[1497]: time="2025-07-15T23:20:30.860853406Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 15 23:20:30.879527 containerd[1497]: time="2025-07-15T23:20:30.879487071Z" level=info msg="Container 567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3: CDI devices from CRI Config.CDIDevices: []"
Jul 15 23:20:30.881907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333853404.mount: Deactivated successfully.
Jul 15 23:20:30.887018 containerd[1497]: time="2025-07-15T23:20:30.886975552Z" level=info msg="CreateContainer within sandbox \"772cf3680aecd3962580171b9e7c5f68994bebb4b368477de022e668b944086b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\""
Jul 15 23:20:30.887778 containerd[1497]: time="2025-07-15T23:20:30.887721204Z" level=info msg="StartContainer for \"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\""
Jul 15 23:20:30.888841 containerd[1497]: time="2025-07-15T23:20:30.888800084Z" level=info msg="connecting to shim 567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3" address="unix:///run/containerd/s/6d2998ee888ad7e3641881d37b16c9dada8e1c9a520036e868cc09d91e3891a1" protocol=ttrpc version=3
Jul 15 23:20:30.908057 systemd[1]: Started cri-containerd-567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3.scope - libcontainer container 567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3.
Jul 15 23:20:30.937050 containerd[1497]: time="2025-07-15T23:20:30.936955889Z" level=info msg="StartContainer for \"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" returns successfully"
Jul 15 23:20:30.983566 containerd[1497]: time="2025-07-15T23:20:30.983531993Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" id:\"6a609aca90b1042f69c0f5c9c317dd7f0c509c25354c8d83a7f7dda57f9c3d40\" pid:4666 exited_at:{seconds:1752621630 nanos:983263323}"
Jul 15 23:20:31.200865 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 15 23:20:31.863028 kubelet[2633]: E0715 23:20:31.863000 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:33.466459 kubelet[2633]: E0715 23:20:33.466419 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:33.647159 containerd[1497]: time="2025-07-15T23:20:33.647111070Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" id:\"25752d92d71689b7ab59389908ca12a0d53953f0fc9311725eddb34d99ee3fd0\" pid:5066 exit_status:1 exited_at:{seconds:1752621633 nanos:646734081}"
Jul 15 23:20:34.085076 systemd-networkd[1425]: lxc_health: Link UP
Jul 15 23:20:34.085293 systemd-networkd[1425]: lxc_health: Gained carrier
Jul 15 23:20:35.416993 systemd-networkd[1425]: lxc_health: Gained IPv6LL
Jul 15 23:20:35.466096 kubelet[2633]: E0715 23:20:35.466012 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:35.485640 kubelet[2633]: I0715 23:20:35.485581 2633 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9mz7r" podStartSLOduration=8.485564065 podStartE2EDuration="8.485564065s" podCreationTimestamp="2025-07-15 23:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 23:20:31.877192619 +0000 UTC m=+83.353939080" watchObservedRunningTime="2025-07-15 23:20:35.485564065 +0000 UTC m=+86.962310486"
Jul 15 23:20:35.780625 containerd[1497]: time="2025-07-15T23:20:35.780514976Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" id:\"c4805c30dd2216442b4d1c0600562b2210b6240e9d7d7ef26dfcf68a4c04f41a\" pid:5208 exited_at:{seconds:1752621635 nanos:779641796}"
Jul 15 23:20:35.782507 kubelet[2633]: E0715 23:20:35.782392 2633 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46682->127.0.0.1:42081: write tcp 127.0.0.1:46682->127.0.0.1:42081: write: connection reset by peer
Jul 15 23:20:35.870297 kubelet[2633]: E0715 23:20:35.869925 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:36.620556 kubelet[2633]: E0715 23:20:36.620127 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:36.620556 kubelet[2633]: E0715 23:20:36.620207 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:36.871930 kubelet[2633]: E0715 23:20:36.871805 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:37.918695 containerd[1497]: time="2025-07-15T23:20:37.918651449Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" id:\"7b9cb99d0f639d8780f9189dd7f9cbefde8cf7df63c0df881885c3ec61231a89\" pid:5242 exited_at:{seconds:1752621637 nanos:918329974}"
Jul 15 23:20:39.618505 kubelet[2633]: E0715 23:20:39.618413 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 15 23:20:40.043346 containerd[1497]: time="2025-07-15T23:20:40.043085207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"567201797575aadb8c686a19aab1bd049d228dc64f95f6d8d846a4c0fb7a3ee3\" id:\"dabd34e79ebba32738ea728891a3271aa7df64ab6f6112bbdc5dae5466921b94\" pid:5266 exited_at:{seconds:1752621640 nanos:42613212}"
Jul 15 23:20:40.048507 sshd[4405]: Connection closed by 10.0.0.1 port 43492
Jul 15 23:20:40.050218 sshd-session[4399]: pam_unix(sshd:session): session closed for user core
Jul 15 23:20:40.053636 systemd[1]: sshd@25-10.0.0.74:22-10.0.0.1:43492.service: Deactivated successfully.
Jul 15 23:20:40.055494 systemd[1]: session-26.scope: Deactivated successfully.
Jul 15 23:20:40.056347 systemd-logind[1473]: Session 26 logged out. Waiting for processes to exit.
Jul 15 23:20:40.058727 systemd-logind[1473]: Removed session 26.
Jul 15 23:20:40.618391 kubelet[2633]: E0715 23:20:40.618289 2633 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"