Jul 9 23:49:10.817910 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 23:49:10.817932 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Jul 9 22:19:33 -00 2025
Jul 9 23:49:10.817943 kernel: KASLR enabled
Jul 9 23:49:10.817949 kernel: efi: EFI v2.7 by EDK II
Jul 9 23:49:10.817955 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18
Jul 9 23:49:10.817960 kernel: random: crng init done
Jul 9 23:49:10.817968 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 9 23:49:10.817973 kernel: secureboot: Secure boot enabled
Jul 9 23:49:10.817979 kernel: ACPI: Early table checksum verification disabled
Jul 9 23:49:10.817987 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Jul 9 23:49:10.817993 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 23:49:10.818000 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818006 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818012 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818020 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818027 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818034 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818040 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818046 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 23:49:10.818052 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001)
Jul 9 23:49:10.818059 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 23:49:10.818065 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 9 23:49:10.818071 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:49:10.818078 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
Jul 9 23:49:10.818084 kernel: Zone ranges:
Jul 9 23:49:10.818091 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:49:10.818098 kernel: DMA32 empty
Jul 9 23:49:10.818104 kernel: Normal empty
Jul 9 23:49:10.818110 kernel: Device empty
Jul 9 23:49:10.818116 kernel: Movable zone start for each node
Jul 9 23:49:10.818122 kernel: Early memory node ranges
Jul 9 23:49:10.818128 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Jul 9 23:49:10.818135 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Jul 9 23:49:10.818141 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Jul 9 23:49:10.818148 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Jul 9 23:49:10.818154 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Jul 9 23:49:10.818160 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Jul 9 23:49:10.818167 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Jul 9 23:49:10.818174 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Jul 9 23:49:10.818180 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 23:49:10.818189 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 23:49:10.818196 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 23:49:10.818203 kernel: psci: probing for conduit method from ACPI.
Jul 9 23:49:10.818210 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 23:49:10.818218 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 23:49:10.818224 kernel: psci: Trusted OS migration not required
Jul 9 23:49:10.818231 kernel: psci: SMC Calling Convention v1.1
Jul 9 23:49:10.818237 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 23:49:10.818244 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 9 23:49:10.818251 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 9 23:49:10.818258 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 23:49:10.818264 kernel: Detected PIPT I-cache on CPU0
Jul 9 23:49:10.818271 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 23:49:10.818279 kernel: CPU features: detected: Spectre-v4
Jul 9 23:49:10.818285 kernel: CPU features: detected: Spectre-BHB
Jul 9 23:49:10.818292 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 23:49:10.818298 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 23:49:10.818305 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 23:49:10.818320 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 23:49:10.818327 kernel: alternatives: applying boot alternatives
Jul 9 23:49:10.818335 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:49:10.818342 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 23:49:10.818349 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 23:49:10.818355 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 23:49:10.818364 kernel: Fallback order for Node 0: 0
Jul 9 23:49:10.818371 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 9 23:49:10.818378 kernel: Policy zone: DMA
Jul 9 23:49:10.818384 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 23:49:10.818391 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 9 23:49:10.818397 kernel: software IO TLB: area num 4.
Jul 9 23:49:10.818404 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 9 23:49:10.818410 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Jul 9 23:49:10.818417 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 23:49:10.818424 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 23:49:10.818431 kernel: rcu: RCU event tracing is enabled.
Jul 9 23:49:10.818447 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 23:49:10.818457 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 23:49:10.818464 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 23:49:10.818470 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 23:49:10.818477 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 23:49:10.818484 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:49:10.818491 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 23:49:10.818497 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 23:49:10.818504 kernel: GICv3: 256 SPIs implemented
Jul 9 23:49:10.818511 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 23:49:10.818517 kernel: Root IRQ handler: gic_handle_irq
Jul 9 23:49:10.818524 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 23:49:10.818532 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 9 23:49:10.818538 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 23:49:10.818545 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 23:49:10.818552 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 23:49:10.818558 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 9 23:49:10.818565 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 9 23:49:10.818572 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 9 23:49:10.818579 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 23:49:10.818585 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:49:10.818592 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 23:49:10.818598 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 23:49:10.818605 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 23:49:10.818630 kernel: arm-pv: using stolen time PV
Jul 9 23:49:10.818637 kernel: Console: colour dummy device 80x25
Jul 9 23:49:10.818643 kernel: ACPI: Core revision 20240827
Jul 9 23:49:10.818656 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 23:49:10.818664 kernel: pid_max: default: 32768 minimum: 301
Jul 9 23:49:10.818671 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 9 23:49:10.818677 kernel: landlock: Up and running.
Jul 9 23:49:10.818684 kernel: SELinux: Initializing.
Jul 9 23:49:10.818691 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:49:10.818699 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 23:49:10.818706 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 23:49:10.818713 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 23:49:10.818731 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 9 23:49:10.818737 kernel: Remapping and enabling EFI services.
Jul 9 23:49:10.818744 kernel: smp: Bringing up secondary CPUs ...
Jul 9 23:49:10.818751 kernel: Detected PIPT I-cache on CPU1
Jul 9 23:49:10.818758 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 23:49:10.818765 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 9 23:49:10.818773 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:49:10.818785 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 23:49:10.818791 kernel: Detected PIPT I-cache on CPU2
Jul 9 23:49:10.818800 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 23:49:10.818807 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 9 23:49:10.818814 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:49:10.818821 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 23:49:10.818828 kernel: Detected PIPT I-cache on CPU3
Jul 9 23:49:10.818835 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 23:49:10.818843 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 9 23:49:10.818850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 23:49:10.818857 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 23:49:10.818864 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 23:49:10.818871 kernel: SMP: Total of 4 processors activated.
Jul 9 23:49:10.818879 kernel: CPU: All CPU(s) started at EL1
Jul 9 23:49:10.818885 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 23:49:10.818893 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 23:49:10.818900 kernel: CPU features: detected: Common not Private translations
Jul 9 23:49:10.818909 kernel: CPU features: detected: CRC32 instructions
Jul 9 23:49:10.818916 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 23:49:10.818922 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 23:49:10.818930 kernel: CPU features: detected: LSE atomic instructions
Jul 9 23:49:10.818937 kernel: CPU features: detected: Privileged Access Never
Jul 9 23:49:10.818944 kernel: CPU features: detected: RAS Extension Support
Jul 9 23:49:10.818951 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 23:49:10.818958 kernel: alternatives: applying system-wide alternatives
Jul 9 23:49:10.818965 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 9 23:49:10.818974 kernel: Memory: 2438320K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 128020K reserved, 0K cma-reserved)
Jul 9 23:49:10.818981 kernel: devtmpfs: initialized
Jul 9 23:49:10.818989 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 23:49:10.818996 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 23:49:10.819003 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 23:49:10.819010 kernel: 0 pages in range for non-PLT usage
Jul 9 23:49:10.819017 kernel: 508448 pages in range for PLT usage
Jul 9 23:49:10.819024 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 23:49:10.819031 kernel: SMBIOS 3.0.0 present.
Jul 9 23:49:10.819039 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 23:49:10.819046 kernel: DMI: Memory slots populated: 1/1
Jul 9 23:49:10.819053 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 23:49:10.819061 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 23:49:10.819068 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 23:49:10.819076 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 23:49:10.819083 kernel: audit: initializing netlink subsys (disabled)
Jul 9 23:49:10.819090 kernel: audit: type=2000 audit(0.038:1): state=initialized audit_enabled=0 res=1
Jul 9 23:49:10.819099 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 23:49:10.819106 kernel: cpuidle: using governor menu
Jul 9 23:49:10.819113 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 23:49:10.819120 kernel: ASID allocator initialised with 32768 entries
Jul 9 23:49:10.819127 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 23:49:10.819134 kernel: Serial: AMBA PL011 UART driver
Jul 9 23:49:10.819141 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 23:49:10.819149 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 23:49:10.819155 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 23:49:10.819164 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 23:49:10.819170 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 23:49:10.819178 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 23:49:10.819185 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 23:49:10.819192 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 23:49:10.819199 kernel: ACPI: Added _OSI(Module Device)
Jul 9 23:49:10.819206 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 23:49:10.819213 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 23:49:10.819220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 23:49:10.819227 kernel: ACPI: Interpreter enabled
Jul 9 23:49:10.819236 kernel: ACPI: Using GIC for interrupt routing
Jul 9 23:49:10.819243 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 23:49:10.819250 kernel: ACPI: CPU0 has been hot-added
Jul 9 23:49:10.819257 kernel: ACPI: CPU1 has been hot-added
Jul 9 23:49:10.819264 kernel: ACPI: CPU2 has been hot-added
Jul 9 23:49:10.819271 kernel: ACPI: CPU3 has been hot-added
Jul 9 23:49:10.819278 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 23:49:10.819285 kernel: printk: legacy console [ttyAMA0] enabled
Jul 9 23:49:10.819292 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 23:49:10.819461 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 23:49:10.819530 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 23:49:10.819588 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 23:49:10.819646 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 23:49:10.819706 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 23:49:10.819717 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 23:49:10.819724 kernel: PCI host bridge to bus 0000:00
Jul 9 23:49:10.819803 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 23:49:10.819866 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 23:49:10.819921 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 23:49:10.819987 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 23:49:10.820066 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 9 23:49:10.820141 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 9 23:49:10.820206 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 9 23:49:10.820268 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 9 23:49:10.820340 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 23:49:10.820403 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 9 23:49:10.820475 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 9 23:49:10.820536 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 9 23:49:10.820591 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 23:49:10.820648 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 23:49:10.820701 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 23:49:10.820711 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 23:49:10.820718 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 23:49:10.820726 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 23:49:10.820733 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 23:49:10.820740 kernel: iommu: Default domain type: Translated
Jul 9 23:49:10.820748 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 23:49:10.820757 kernel: efivars: Registered efivars operations
Jul 9 23:49:10.820765 kernel: vgaarb: loaded
Jul 9 23:49:10.820772 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 23:49:10.820779 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 23:49:10.820787 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 23:49:10.820794 kernel: pnp: PnP ACPI init
Jul 9 23:49:10.820865 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 23:49:10.820875 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 23:49:10.820882 kernel: NET: Registered PF_INET protocol family
Jul 9 23:49:10.820892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 23:49:10.820899 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 23:49:10.820906 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 23:49:10.820914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 23:49:10.820922 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 23:49:10.820929 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 23:49:10.820936 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:49:10.820944 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 23:49:10.820951 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 23:49:10.820960 kernel: PCI: CLS 0 bytes, default 64
Jul 9 23:49:10.820967 kernel: kvm [1]: HYP mode not available
Jul 9 23:49:10.820975 kernel: Initialise system trusted keyrings
Jul 9 23:49:10.820982 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 23:49:10.820989 kernel: Key type asymmetric registered
Jul 9 23:49:10.820996 kernel: Asymmetric key parser 'x509' registered
Jul 9 23:49:10.821004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 9 23:49:10.821011 kernel: io scheduler mq-deadline registered
Jul 9 23:49:10.821018 kernel: io scheduler kyber registered
Jul 9 23:49:10.821027 kernel: io scheduler bfq registered
Jul 9 23:49:10.821034 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 23:49:10.821042 kernel: ACPI: button: Power Button [PWRB]
Jul 9 23:49:10.821050 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 23:49:10.821114 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 23:49:10.821124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 23:49:10.821131 kernel: thunder_xcv, ver 1.0
Jul 9 23:49:10.821138 kernel: thunder_bgx, ver 1.0
Jul 9 23:49:10.821145 kernel: nicpf, ver 1.0
Jul 9 23:49:10.821155 kernel: nicvf, ver 1.0
Jul 9 23:49:10.821225 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 23:49:10.821282 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T23:49:10 UTC (1752104950)
Jul 9 23:49:10.821292 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 23:49:10.821299 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 9 23:49:10.821307 kernel: watchdog: NMI not fully supported
Jul 9 23:49:10.821322 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 23:49:10.821329 kernel: NET: Registered PF_INET6 protocol family
Jul 9 23:49:10.821340 kernel: Segment Routing with IPv6
Jul 9 23:49:10.821347 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 23:49:10.821355 kernel: NET: Registered PF_PACKET protocol family
Jul 9 23:49:10.821362 kernel: Key type dns_resolver registered
Jul 9 23:49:10.821369 kernel: registered taskstats version 1
Jul 9 23:49:10.821377 kernel: Loading compiled-in X.509 certificates
Jul 9 23:49:10.821384 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 11eff9deb028731c4f89f27f6fac8d1c08902e5a'
Jul 9 23:49:10.821391 kernel: Demotion targets for Node 0: null
Jul 9 23:49:10.821398 kernel: Key type .fscrypt registered
Jul 9 23:49:10.821408 kernel: Key type fscrypt-provisioning registered
Jul 9 23:49:10.821415 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 23:49:10.821422 kernel: ima: Allocated hash algorithm: sha1
Jul 9 23:49:10.821429 kernel: ima: No architecture policies found
Jul 9 23:49:10.821445 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 23:49:10.821453 kernel: clk: Disabling unused clocks
Jul 9 23:49:10.821460 kernel: PM: genpd: Disabling unused power domains
Jul 9 23:49:10.821467 kernel: Warning: unable to open an initial console.
Jul 9 23:49:10.821475 kernel: Freeing unused kernel memory: 39488K
Jul 9 23:49:10.821484 kernel: Run /init as init process
Jul 9 23:49:10.821491 kernel: with arguments:
Jul 9 23:49:10.821498 kernel: /init
Jul 9 23:49:10.821505 kernel: with environment:
Jul 9 23:49:10.821512 kernel: HOME=/
Jul 9 23:49:10.821519 kernel: TERM=linux
Jul 9 23:49:10.821526 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 23:49:10.821534 systemd[1]: Successfully made /usr/ read-only.
Jul 9 23:49:10.821545 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:49:10.821554 systemd[1]: Detected virtualization kvm.
Jul 9 23:49:10.821562 systemd[1]: Detected architecture arm64.
Jul 9 23:49:10.821569 systemd[1]: Running in initrd.
Jul 9 23:49:10.821577 systemd[1]: No hostname configured, using default hostname.
Jul 9 23:49:10.821585 systemd[1]: Hostname set to .
Jul 9 23:49:10.821592 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:49:10.821600 systemd[1]: Queued start job for default target initrd.target.
Jul 9 23:49:10.821610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:49:10.821618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:49:10.821626 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 23:49:10.821634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:49:10.821642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 23:49:10.821650 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 23:49:10.821663 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 23:49:10.821671 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 23:49:10.821679 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:49:10.821687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:49:10.821696 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:49:10.821706 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:49:10.821716 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:49:10.821724 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:49:10.821732 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 23:49:10.821742 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 23:49:10.821750 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 23:49:10.821758 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 23:49:10.821766 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:49:10.821774 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:49:10.821782 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:49:10.821789 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:49:10.821797 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 23:49:10.821807 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:49:10.821815 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 23:49:10.821823 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 9 23:49:10.821831 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 23:49:10.821839 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:49:10.821847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:49:10.821855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:49:10.821863 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 23:49:10.821873 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:49:10.821881 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 23:49:10.821913 systemd-journald[246]: Collecting audit messages is disabled.
Jul 9 23:49:10.821936 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 23:49:10.821945 systemd-journald[246]: Journal started
Jul 9 23:49:10.821964 systemd-journald[246]: Runtime Journal (/run/log/journal/b894be9a182346ea881c78c4e3982305) is 6M, max 48.5M, 42.4M free.
Jul 9 23:49:10.818697 systemd-modules-load[247]: Inserted module 'overlay'
Jul 9 23:49:10.825862 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:49:10.830899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:49:10.835454 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 23:49:10.841020 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 23:49:10.837304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:49:10.843848 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 9 23:49:10.844845 kernel: Bridge firewalling registered
Jul 9 23:49:10.845687 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 23:49:10.847300 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:49:10.851501 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:49:10.853343 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:49:10.854516 systemd-tmpfiles[265]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 9 23:49:10.862610 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:49:10.864377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:49:10.867573 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 23:49:10.871673 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:49:10.876514 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:49:10.879097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:49:10.883250 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=da23c3aa7de24c290e5e9aff0a0fccd6a322ecaa9bbfc71c29b2f39446459116
Jul 9 23:49:10.927320 systemd-resolved[295]: Positive Trust Anchors:
Jul 9 23:49:10.927340 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:49:10.927371 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:49:10.932918 systemd-resolved[295]: Defaulting to hostname 'linux'.
Jul 9 23:49:10.934597 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:49:10.937652 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:49:10.975472 kernel: SCSI subsystem initialized
Jul 9 23:49:10.980452 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 23:49:10.989466 kernel: iscsi: registered transport (tcp)
Jul 9 23:49:11.005812 kernel: iscsi: registered transport (qla4xxx)
Jul 9 23:49:11.005859 kernel: QLogic iSCSI HBA Driver
Jul 9 23:49:11.023529 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:49:11.045493 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:49:11.047168 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:49:11.096026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:49:11.098532 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 23:49:11.164471 kernel: raid6: neonx8 gen() 15811 MB/s
Jul 9 23:49:11.181469 kernel: raid6: neonx4 gen() 15837 MB/s
Jul 9 23:49:11.198467 kernel: raid6: neonx2 gen() 13209 MB/s
Jul 9 23:49:11.215467 kernel: raid6: neonx1 gen() 10429 MB/s
Jul 9 23:49:11.232459 kernel: raid6: int64x8 gen() 6899 MB/s
Jul 9 23:49:11.249468 kernel: raid6: int64x4 gen() 7340 MB/s
Jul 9 23:49:11.266469 kernel: raid6: int64x2 gen() 6099 MB/s
Jul 9 23:49:11.283586 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 9 23:49:11.283610 kernel: raid6: using algorithm neonx4 gen() 15837 MB/s
Jul 9 23:49:11.301597 kernel: raid6: .... xor() 12410 MB/s, rmw enabled
Jul 9 23:49:11.301632 kernel: raid6: using neon recovery algorithm
Jul 9 23:49:11.307844 kernel: xor: measuring software checksum speed
Jul 9 23:49:11.307875 kernel: 8regs : 21630 MB/sec
Jul 9 23:49:11.307884 kernel: 32regs : 20886 MB/sec
Jul 9 23:49:11.308502 kernel: arm64_neon : 27879 MB/sec
Jul 9 23:49:11.308522 kernel: xor: using function: arm64_neon (27879 MB/sec)
Jul 9 23:49:11.367449 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 23:49:11.374653 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:49:11.377321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:49:11.410974 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Jul 9 23:49:11.415110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:49:11.417855 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 23:49:11.443247 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
Jul 9 23:49:11.466702 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 23:49:11.469231 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:49:11.521287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:49:11.526430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 23:49:11.576471 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 9 23:49:11.578870 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 23:49:11.584795 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 23:49:11.584845 kernel: GPT:9289727 != 19775487
Jul 9 23:49:11.584855 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 23:49:11.584864 kernel: GPT:9289727 != 19775487 Jul 9 23:49:11.586088 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 9 23:49:11.589459 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:49:11.595655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 9 23:49:11.595781 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:49:11.599390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:49:11.604616 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 9 23:49:11.625341 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 9 23:49:11.628793 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 9 23:49:11.631547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 23:49:11.649886 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 9 23:49:11.657673 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 9 23:49:11.664048 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 9 23:49:11.665322 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 9 23:49:11.668411 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:49:11.670748 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:49:11.673029 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 9 23:49:11.676038 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 9 23:49:11.678108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 9 23:49:11.694632 disk-uuid[589]: Primary Header is updated. 
Jul 9 23:49:11.694632 disk-uuid[589]: Secondary Entries is updated. Jul 9 23:49:11.694632 disk-uuid[589]: Secondary Header is updated. Jul 9 23:49:11.700449 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:49:11.700987 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:49:12.715463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 23:49:12.717665 disk-uuid[594]: The operation has completed successfully. Jul 9 23:49:12.748981 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 23:49:12.749083 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 23:49:12.769778 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 23:49:12.800366 sh[610]: Success Jul 9 23:49:12.822729 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 9 23:49:12.822778 kernel: device-mapper: uevent: version 1.0.3 Jul 9 23:49:12.825462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 9 23:49:12.850203 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 9 23:49:12.884699 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 23:49:12.897163 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 23:49:12.899987 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 9 23:49:12.911483 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 9 23:49:12.911544 kernel: BTRFS: device fsid 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (622) Jul 9 23:49:12.913096 kernel: BTRFS info (device dm-0): first mount of filesystem 0f8170d9-c2a5-4c49-82bc-4e538bfc9b9b Jul 9 23:49:12.914155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:49:12.914913 kernel: BTRFS info (device dm-0): using free-space-tree Jul 9 23:49:12.918930 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 23:49:12.920407 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:49:12.921890 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 23:49:12.922829 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 23:49:12.926388 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 9 23:49:12.947479 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 9 23:49:12.949808 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:49:12.949867 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:49:12.949885 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:49:12.957451 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:49:12.959517 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 23:49:12.962225 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 23:49:13.034290 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 9 23:49:13.040651 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 23:49:13.098604 systemd-networkd[793]: lo: Link UP Jul 9 23:49:13.098618 systemd-networkd[793]: lo: Gained carrier Jul 9 23:49:13.099313 systemd-networkd[793]: Enumeration completed Jul 9 23:49:13.099981 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 23:49:13.100522 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:49:13.100525 systemd-networkd[793]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 23:49:13.101239 systemd-networkd[793]: eth0: Link UP Jul 9 23:49:13.101242 systemd-networkd[793]: eth0: Gained carrier Jul 9 23:49:13.101249 systemd-networkd[793]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 23:49:13.102604 systemd[1]: Reached target network.target - Network. 
Jul 9 23:49:13.124503 systemd-networkd[793]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 23:49:13.166501 ignition[702]: Ignition 2.21.0 Jul 9 23:49:13.166512 ignition[702]: Stage: fetch-offline Jul 9 23:49:13.166556 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:13.166563 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:13.166757 ignition[702]: parsed url from cmdline: "" Jul 9 23:49:13.166761 ignition[702]: no config URL provided Jul 9 23:49:13.166765 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 23:49:13.166771 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jul 9 23:49:13.166799 ignition[702]: op(1): [started] loading QEMU firmware config module Jul 9 23:49:13.166804 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 23:49:13.180625 ignition[702]: op(1): [finished] loading QEMU firmware config module Jul 9 23:49:13.219258 ignition[702]: parsing config with SHA512: 72b5c132c807b3cbba9def4cadaf9de77727eab44238a50e4045ed126f5595c376106d686836a81cb817a73c58bb5fcda0f3707d626c79feecf905c517c4ffc1 Jul 9 23:49:13.226272 unknown[702]: fetched base config from "system" Jul 9 23:49:13.226284 unknown[702]: fetched user config from "qemu" Jul 9 23:49:13.226749 ignition[702]: fetch-offline: fetch-offline passed Jul 9 23:49:13.226894 systemd-resolved[295]: Detected conflict on linux IN A 10.0.0.74 Jul 9 23:49:13.226803 ignition[702]: Ignition finished successfully Jul 9 23:49:13.226902 systemd-resolved[295]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jul 9 23:49:13.229247 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:49:13.231580 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 23:49:13.232384 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 9 23:49:13.269453 ignition[807]: Ignition 2.21.0 Jul 9 23:49:13.269468 ignition[807]: Stage: kargs Jul 9 23:49:13.269642 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:13.269652 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:13.270959 ignition[807]: kargs: kargs passed Jul 9 23:49:13.271021 ignition[807]: Ignition finished successfully Jul 9 23:49:13.275298 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 23:49:13.278591 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 23:49:13.306822 ignition[815]: Ignition 2.21.0 Jul 9 23:49:13.306836 ignition[815]: Stage: disks Jul 9 23:49:13.306983 ignition[815]: no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:13.306992 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:13.308466 ignition[815]: disks: disks passed Jul 9 23:49:13.310988 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 23:49:13.308555 ignition[815]: Ignition finished successfully Jul 9 23:49:13.312658 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 23:49:13.314507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 23:49:13.316353 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 23:49:13.318351 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 23:49:13.320386 systemd[1]: Reached target basic.target - Basic System. Jul 9 23:49:13.323003 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 23:49:13.352288 systemd-fsck[825]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 9 23:49:13.358705 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 23:49:13.362762 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 9 23:49:13.440463 kernel: EXT4-fs (vda9): mounted filesystem 961fd3ec-635c-4a87-8aef-ca8f12cd8be8 r/w with ordered data mode. Quota mode: none. Jul 9 23:49:13.441160 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 9 23:49:13.442609 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 23:49:13.445066 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 23:49:13.446890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 23:49:13.447973 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 23:49:13.448031 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 23:49:13.448055 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:49:13.461098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 23:49:13.463792 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 23:49:13.466910 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (833) Jul 9 23:49:13.471687 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:49:13.471734 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:49:13.471745 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:49:13.475950 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 23:49:13.527080 initrd-setup-root[858]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 23:49:13.530387 initrd-setup-root[865]: cut: /sysroot/etc/group: No such file or directory Jul 9 23:49:13.533952 initrd-setup-root[872]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 23:49:13.538623 initrd-setup-root[879]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 23:49:13.615501 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 23:49:13.618558 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 23:49:13.620103 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 23:49:13.634468 kernel: BTRFS info (device vda6): last unmount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:49:13.647908 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 23:49:13.652329 ignition[951]: INFO : Ignition 2.21.0 Jul 9 23:49:13.652329 ignition[951]: INFO : Stage: mount Jul 9 23:49:13.654916 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:13.654916 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:13.654916 ignition[951]: INFO : mount: mount passed Jul 9 23:49:13.654916 ignition[951]: INFO : Ignition finished successfully Jul 9 23:49:13.655604 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 23:49:13.657815 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 23:49:13.909950 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 23:49:13.911457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 9 23:49:13.936467 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (964) Jul 9 23:49:13.938837 kernel: BTRFS info (device vda6): first mount of filesystem 3e5253a1-0691-476f-bde5-7794093008ce Jul 9 23:49:13.938873 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 23:49:13.938897 kernel: BTRFS info (device vda6): using free-space-tree Jul 9 23:49:13.942160 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 23:49:13.971188 ignition[981]: INFO : Ignition 2.21.0 Jul 9 23:49:13.973141 ignition[981]: INFO : Stage: files Jul 9 23:49:13.973141 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:13.973141 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:13.976338 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Jul 9 23:49:13.976338 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 23:49:13.976338 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 23:49:13.980374 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 23:49:13.980374 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 23:49:13.980374 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 23:49:13.980055 unknown[981]: wrote ssh authorized keys file for user: core Jul 9 23:49:13.985524 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 23:49:13.985524 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 9 23:49:14.095538 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 
23:49:14.190547 systemd-networkd[793]: eth0: Gained IPv6LL Jul 9 23:49:14.518143 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 23:49:14.518143 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:49:14.522221 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 9 23:49:14.861404 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 23:49:14.958963 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 23:49:14.958963 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:49:14.963167 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 9 23:49:15.339398 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 23:49:15.652876 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 23:49:15.652876 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 9 23:49:15.656356 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:49:15.690657 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 23:49:15.690657 ignition[981]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Jul 9 23:49:15.690657 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 9 23:49:15.696040 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:49:15.696040 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 23:49:15.696040 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 9 23:49:15.696040 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 23:49:15.712961 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:49:15.716992 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 23:49:15.719668 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 23:49:15.719668 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 9 23:49:15.719668 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 23:49:15.719668 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:49:15.719668 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 23:49:15.719668 ignition[981]: INFO : files: files passed Jul 9 23:49:15.719668 ignition[981]: INFO : Ignition finished successfully Jul 9 23:49:15.720368 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 9 23:49:15.723169 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jul 9 23:49:15.727708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 23:49:15.736981 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 23:49:15.738002 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 23:49:15.738482 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 23:49:15.741779 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:49:15.741779 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:49:15.744854 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 23:49:15.745490 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:49:15.747997 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 23:49:15.750700 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 23:49:15.806767 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 23:49:15.806892 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 23:49:15.809520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 23:49:15.811399 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 23:49:15.813369 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 23:49:15.814301 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 23:49:15.837550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:49:15.840661 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 9 23:49:15.873012 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 23:49:15.874406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 23:49:15.876652 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 23:49:15.878468 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 23:49:15.878671 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 23:49:15.883146 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 23:49:15.885156 systemd[1]: Stopped target basic.target - Basic System. Jul 9 23:49:15.886784 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 23:49:15.888606 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 23:49:15.890686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 23:49:15.892745 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 9 23:49:15.894639 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 23:49:15.896496 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 23:49:15.898539 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 23:49:15.900524 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 23:49:15.902410 systemd[1]: Stopped target swap.target - Swaps. Jul 9 23:49:15.904065 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 23:49:15.904202 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 23:49:15.906650 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 23:49:15.908818 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 23:49:15.910745 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 9 23:49:15.911650 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 23:49:15.912919 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 23:49:15.913052 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 23:49:15.916012 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 23:49:15.916156 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 23:49:15.918462 systemd[1]: Stopped target paths.target - Path Units. Jul 9 23:49:15.920155 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 23:49:15.920977 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 23:49:15.922307 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 23:49:15.924155 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 23:49:15.925802 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 23:49:15.925912 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 23:49:15.929934 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 23:49:15.930031 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 23:49:15.932742 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 23:49:15.932872 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 23:49:15.934714 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 23:49:15.934821 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 23:49:15.938779 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 23:49:15.940215 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 23:49:15.940360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 9 23:49:15.945919 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 23:49:15.947344 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 23:49:15.947523 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 23:49:15.949633 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 23:49:15.949743 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 23:49:15.957462 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 23:49:15.966635 kernel: hrtimer: interrupt took 3211680 ns Jul 9 23:49:15.957557 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 23:49:15.972381 ignition[1036]: INFO : Ignition 2.21.0 Jul 9 23:49:15.972381 ignition[1036]: INFO : Stage: umount Jul 9 23:49:15.975072 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 23:49:15.975072 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 23:49:15.975072 ignition[1036]: INFO : umount: umount passed Jul 9 23:49:15.975072 ignition[1036]: INFO : Ignition finished successfully Jul 9 23:49:15.976395 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 23:49:15.976502 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 23:49:15.980619 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 23:49:15.981460 systemd[1]: Stopped target network.target - Network. Jul 9 23:49:15.983456 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 23:49:15.983544 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 23:49:15.985321 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 23:49:15.985382 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 23:49:15.987008 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 23:49:15.987066 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 9 23:49:15.988664 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 23:49:15.988713 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 23:49:15.991094 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 23:49:15.992494 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 23:49:15.994844 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 9 23:49:15.994942 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 9 23:49:15.997159 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 9 23:49:15.997305 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 9 23:49:16.002816 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 23:49:16.002968 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 23:49:16.008324 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 23:49:16.008570 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 23:49:16.008683 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 23:49:16.013606 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 23:49:16.014190 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 9 23:49:16.016179 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 23:49:16.016231 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 23:49:16.020576 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 23:49:16.021945 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 23:49:16.022009 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 23:49:16.024106 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jul 9 23:49:16.024154 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:49:16.027186 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 23:49:16.027235 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:49:16.029193 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 23:49:16.029244 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:49:16.036791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:49:16.041697 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 23:49:16.041777 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:49:16.058518 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 23:49:16.060669 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:49:16.062695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 23:49:16.062749 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:49:16.064327 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 23:49:16.064367 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:49:16.066358 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 23:49:16.066430 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 23:49:16.069392 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 23:49:16.069477 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 23:49:16.072372 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 23:49:16.072458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 23:49:16.076315 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 23:49:16.078142 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 9 23:49:16.078224 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:49:16.081997 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 23:49:16.082055 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:49:16.086277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 23:49:16.086341 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:49:16.090977 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 9 23:49:16.091039 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 9 23:49:16.091076 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 9 23:49:16.091365 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 23:49:16.093628 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 23:49:16.099720 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 23:49:16.099960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 23:49:16.102267 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 23:49:16.105153 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 23:49:16.120819 systemd[1]: Switching root.
Jul 9 23:49:16.166789 systemd-journald[246]: Journal stopped
Jul 9 23:49:17.047679 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Jul 9 23:49:17.047740 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 23:49:17.047752 kernel: SELinux: policy capability open_perms=1
Jul 9 23:49:17.047761 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 23:49:17.047770 kernel: SELinux: policy capability always_check_network=0
Jul 9 23:49:17.047779 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 23:49:17.047791 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 23:49:17.047801 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 23:49:17.047813 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 23:49:17.047828 kernel: SELinux: policy capability userspace_initial_context=0
Jul 9 23:49:17.047838 kernel: audit: type=1403 audit(1752104956.374:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 23:49:17.047853 systemd[1]: Successfully loaded SELinux policy in 51.799ms.
Jul 9 23:49:17.047873 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.628ms.
Jul 9 23:49:17.047884 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 23:49:17.047895 systemd[1]: Detected virtualization kvm.
Jul 9 23:49:17.047905 systemd[1]: Detected architecture arm64.
Jul 9 23:49:17.047915 systemd[1]: Detected first boot.
Jul 9 23:49:17.047926 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 23:49:17.047936 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 23:49:17.047945 zram_generator::config[1081]: No configuration found.
Jul 9 23:49:17.047956 systemd[1]: Populated /etc with preset unit settings.
Jul 9 23:49:17.047967 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 23:49:17.047978 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 23:49:17.047987 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 23:49:17.047998 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:49:17.048009 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 23:49:17.048021 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 23:49:17.048032 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 23:49:17.048042 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 23:49:17.048052 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 23:49:17.048062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 23:49:17.048072 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 23:49:17.048082 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 23:49:17.048093 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 23:49:17.048105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 23:49:17.048116 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 23:49:17.048126 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 23:49:17.048136 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 23:49:17.048146 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 23:49:17.048157 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 9 23:49:17.048168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 23:49:17.048178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 23:49:17.048190 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 23:49:17.048200 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 23:49:17.048210 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 23:49:17.048221 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 23:49:17.048231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 23:49:17.048245 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 23:49:17.048255 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 23:49:17.048265 systemd[1]: Reached target swap.target - Swaps.
Jul 9 23:49:17.048275 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 23:49:17.048287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 23:49:17.048308 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 23:49:17.048320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 23:49:17.048330 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 23:49:17.048341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 23:49:17.048351 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 23:49:17.048363 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 23:49:17.048372 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 23:49:17.048382 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 23:49:17.048395 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 23:49:17.048405 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 23:49:17.048415 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 23:49:17.048425 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 23:49:17.048445 systemd[1]: Reached target machines.target - Containers.
Jul 9 23:49:17.048456 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 23:49:17.048467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:49:17.048477 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 23:49:17.048490 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 23:49:17.048501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:49:17.048512 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:49:17.048522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:49:17.048532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 23:49:17.048542 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:49:17.048553 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 23:49:17.048563 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 23:49:17.048573 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 23:49:17.048586 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 23:49:17.048596 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 23:49:17.048607 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:49:17.048616 kernel: loop: module loaded
Jul 9 23:49:17.048638 kernel: fuse: init (API version 7.41)
Jul 9 23:49:17.048654 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 23:49:17.048664 kernel: ACPI: bus type drm_connector registered
Jul 9 23:49:17.048675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 23:49:17.048686 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 23:49:17.048725 systemd-journald[1142]: Collecting audit messages is disabled.
Jul 9 23:49:17.048766 systemd-journald[1142]: Journal started
Jul 9 23:49:17.048789 systemd-journald[1142]: Runtime Journal (/run/log/journal/b894be9a182346ea881c78c4e3982305) is 6M, max 48.5M, 42.4M free.
Jul 9 23:49:16.796651 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 23:49:16.817402 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 23:49:16.817797 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 23:49:17.051843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 23:49:17.057049 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 23:49:17.062634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 23:49:17.062704 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 23:49:17.062720 systemd[1]: Stopped verity-setup.service.
Jul 9 23:49:17.068551 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 23:49:17.069976 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 23:49:17.071200 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 23:49:17.072487 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 23:49:17.073609 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 23:49:17.074987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 23:49:17.076318 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 23:49:17.079473 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 23:49:17.081193 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 23:49:17.081380 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 23:49:17.082959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:49:17.083134 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:49:17.084831 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:49:17.084997 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:49:17.086403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:49:17.086581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:49:17.088076 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 23:49:17.089494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 23:49:17.090918 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:49:17.091083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:49:17.092765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 23:49:17.094320 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 23:49:17.097937 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 23:49:17.099831 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 23:49:17.113564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 23:49:17.116305 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 23:49:17.120712 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 23:49:17.121880 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 23:49:17.121930 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 23:49:17.124093 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 23:49:17.128105 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 23:49:17.129258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:49:17.131006 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 23:49:17.133424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 23:49:17.134888 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:49:17.137616 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 23:49:17.139025 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:49:17.141201 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 23:49:17.143704 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 23:49:17.150671 systemd-journald[1142]: Time spent on flushing to /var/log/journal/b894be9a182346ea881c78c4e3982305 is 12.850ms for 886 entries.
Jul 9 23:49:17.150671 systemd-journald[1142]: System Journal (/var/log/journal/b894be9a182346ea881c78c4e3982305) is 8M, max 195.6M, 187.6M free.
Jul 9 23:49:17.179777 systemd-journald[1142]: Received client request to flush runtime journal.
Jul 9 23:49:17.179856 kernel: loop0: detected capacity change from 0 to 107312
Jul 9 23:49:17.148706 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 23:49:17.151993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 23:49:17.153653 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 23:49:17.156019 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 23:49:17.165644 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 23:49:17.185698 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 23:49:17.187626 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 23:49:17.191637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 23:49:17.196639 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 23:49:17.197446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 23:49:17.201031 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 23:49:17.236474 kernel: loop1: detected capacity change from 0 to 138376
Jul 9 23:49:17.239470 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 23:49:17.246638 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 23:49:17.252855 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 23:49:17.277481 kernel: loop2: detected capacity change from 0 to 211168
Jul 9 23:49:17.291759 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 9 23:49:17.291782 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Jul 9 23:49:17.297696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 23:49:17.308521 kernel: loop3: detected capacity change from 0 to 107312
Jul 9 23:49:17.314474 kernel: loop4: detected capacity change from 0 to 138376
Jul 9 23:49:17.322457 kernel: loop5: detected capacity change from 0 to 211168
Jul 9 23:49:17.329102 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 23:49:17.329672 (sd-merge)[1221]: Merged extensions into '/usr'.
Jul 9 23:49:17.333512 systemd[1]: Reload requested from client PID 1196 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 23:49:17.333671 systemd[1]: Reloading...
Jul 9 23:49:17.391500 zram_generator::config[1247]: No configuration found.
Jul 9 23:49:17.475704 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:49:17.525805 ldconfig[1191]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 23:49:17.538829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 23:49:17.538984 systemd[1]: Reloading finished in 204 ms.
Jul 9 23:49:17.558230 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 23:49:17.561357 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 23:49:17.580785 systemd[1]: Starting ensure-sysext.service...
Jul 9 23:49:17.582780 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 23:49:17.595182 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Jul 9 23:49:17.595198 systemd[1]: Reloading...
Jul 9 23:49:17.605642 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 9 23:49:17.605675 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 9 23:49:17.605875 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 23:49:17.606903 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 23:49:17.607552 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 23:49:17.607758 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 9 23:49:17.607803 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Jul 9 23:49:17.610587 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:49:17.610598 systemd-tmpfiles[1284]: Skipping /boot
Jul 9 23:49:17.620215 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 23:49:17.620235 systemd-tmpfiles[1284]: Skipping /boot
Jul 9 23:49:17.649606 zram_generator::config[1311]: No configuration found.
Jul 9 23:49:17.729656 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:49:17.793800 systemd[1]: Reloading finished in 198 ms.
Jul 9 23:49:17.805222 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 23:49:17.806903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 23:49:17.824050 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:49:17.827171 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 23:49:17.830649 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 23:49:17.835664 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 23:49:17.838825 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 23:49:17.842403 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 23:49:17.850222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:49:17.857485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:49:17.860855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:49:17.863784 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:49:17.865639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:49:17.865765 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:49:17.867774 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 23:49:17.872479 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 23:49:17.875072 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:49:17.875330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:49:17.879505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:49:17.881036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:49:17.882928 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:49:17.883101 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:49:17.888465 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:49:17.888646 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:49:17.891729 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 23:49:17.893406 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 23:49:17.896679 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Jul 9 23:49:17.897955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:49:17.899699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 23:49:17.906463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 23:49:17.910677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 23:49:17.911941 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:49:17.912065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:49:17.921973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 23:49:17.922146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 23:49:17.923993 systemd[1]: Finished ensure-sysext.service.
Jul 9 23:49:17.927125 augenrules[1385]: No rules
Jul 9 23:49:17.925242 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 23:49:17.926848 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 23:49:17.927009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 23:49:17.928551 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:49:17.928733 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:49:17.933978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 23:49:17.939660 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 23:49:17.940756 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 23:49:17.940796 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 23:49:17.940849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 23:49:17.942691 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 23:49:17.943876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 23:49:17.944104 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 23:49:17.945600 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 23:49:17.948473 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 23:49:17.949779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 23:49:17.956689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 23:49:17.966783 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 23:49:17.968328 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 23:49:17.976929 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 23:49:17.978524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 23:49:18.005806 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 9 23:49:18.059953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 23:49:18.062829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 23:49:18.095321 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 23:49:18.126549 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 23:49:18.127898 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 23:49:18.144158 systemd-networkd[1422]: lo: Link UP
Jul 9 23:49:18.144170 systemd-networkd[1422]: lo: Gained carrier
Jul 9 23:49:18.145261 systemd-networkd[1422]: Enumeration completed
Jul 9 23:49:18.145551 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 23:49:18.145784 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:49:18.145796 systemd-networkd[1422]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 23:49:18.146370 systemd-networkd[1422]: eth0: Link UP
Jul 9 23:49:18.146587 systemd-networkd[1422]: eth0: Gained carrier
Jul 9 23:49:18.146602 systemd-networkd[1422]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 23:49:18.149237 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 23:49:18.157506 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 23:49:18.163489 systemd-networkd[1422]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 23:49:18.163878 systemd-resolved[1351]: Positive Trust Anchors:
Jul 9 23:49:18.163895 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 23:49:18.163928 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 23:49:18.164092 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection.
Jul 9 23:49:18.164718 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 9 23:49:18.164767 systemd-timesyncd[1406]: Initial clock synchronization to Wed 2025-07-09 23:49:18.073862 UTC.
Jul 9 23:49:18.180088 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Jul 9 23:49:18.186700 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 23:49:18.188096 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 23:49:18.190405 systemd[1]: Reached target network.target - Network.
Jul 9 23:49:18.191337 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 23:49:18.192567 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 23:49:18.194202 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 23:49:18.195895 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 23:49:18.197258 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 23:49:18.198677 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 23:49:18.200292 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 23:49:18.202694 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 23:49:18.202729 systemd[1]: Reached target paths.target - Path Units.
Jul 9 23:49:18.204149 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 23:49:18.206412 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 23:49:18.209087 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 23:49:18.212317 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 23:49:18.214480 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 23:49:18.215637 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 23:49:18.219794 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 23:49:18.221270 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 23:49:18.223138 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 23:49:18.228147 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 23:49:18.229217 systemd[1]: Reached target basic.target - Basic System.
Jul 9 23:49:18.230237 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:49:18.230357 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 23:49:18.231690 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 23:49:18.233852 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 23:49:18.236775 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 23:49:18.238938 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 23:49:18.241031 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 23:49:18.242036 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 23:49:18.243237 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 23:49:18.245216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 23:49:18.249607 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 23:49:18.251011 jq[1468]: false
Jul 9 23:49:18.252528 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 23:49:18.256670 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 23:49:18.259016 extend-filesystems[1469]: Found /dev/vda6
Jul 9 23:49:18.259907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 23:49:18.262162 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 23:49:18.262740 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 23:49:18.264295 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 23:49:18.266901 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 23:49:18.273864 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 23:49:18.277594 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 23:49:18.277782 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 23:49:18.278379 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 23:49:18.278559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 23:49:18.278981 extend-filesystems[1469]: Found /dev/vda9
Jul 9 23:49:18.280976 jq[1487]: true
Jul 9 23:49:18.281038 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 23:49:18.281208 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 23:49:18.282282 extend-filesystems[1469]: Checking size of /dev/vda9
Jul 9 23:49:18.303838 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 23:49:18.314579 jq[1495]: true
Jul 9 23:49:18.330409 tar[1493]: linux-arm64/LICENSE
Jul 9 23:49:18.331042 tar[1493]: linux-arm64/helm
Jul 9 23:49:18.333070 extend-filesystems[1469]: Resized partition /dev/vda9
Jul 9 23:49:18.337942 extend-filesystems[1512]: resize2fs 1.47.2 (1-Jan-2025)
Jul 9 23:49:18.342494 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 9 23:49:18.366680 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 9 23:49:18.370371 dbus-daemon[1466]: [system] SELinux support is enabled
Jul 9 23:49:18.371772 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 23:49:18.376707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 23:49:18.378588 extend-filesystems[1512]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 9 23:49:18.378588 extend-filesystems[1512]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 9 23:49:18.378588 extend-filesystems[1512]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 9 23:49:18.385649 extend-filesystems[1469]: Resized filesystem in /dev/vda9
Jul 9 23:49:18.385057 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 23:49:18.385100 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 23:49:18.390802 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 23:49:18.390823 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 23:49:18.394830 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 23:49:18.395081 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 23:49:18.400115 update_engine[1485]: I20250709 23:49:18.399971 1485 main.cc:92] Flatcar Update Engine starting
Jul 9 23:49:18.404983 update_engine[1485]: I20250709 23:49:18.404938 1485 update_check_scheduler.cc:74] Next update check in 3m26s
Jul 9 23:49:18.414020 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 23:49:18.420489 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 23:49:18.424107 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 23:49:18.426473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 23:49:18.429667 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 23:49:18.434464 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 9 23:49:18.434906 systemd-logind[1479]: New seat seat0.
Jul 9 23:49:18.435848 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 23:49:18.533466 locksmithd[1535]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 23:49:18.605776 containerd[1496]: time="2025-07-09T23:49:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 9 23:49:18.607525 containerd[1496]: time="2025-07-09T23:49:18.607383960Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 9 23:49:18.616633 containerd[1496]: time="2025-07-09T23:49:18.616581440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.08µs"
Jul 9 23:49:18.616633 containerd[1496]: time="2025-07-09T23:49:18.616620680Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 9 23:49:18.616730 containerd[1496]: time="2025-07-09T23:49:18.616639760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 9 23:49:18.616804 containerd[1496]: time="2025-07-09T23:49:18.616780200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 9 23:49:18.616827 containerd[1496]: time="2025-07-09T23:49:18.616805840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 9 23:49:18.616845 containerd[1496]: time="2025-07-09T23:49:18.616832840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:49:18.616904 containerd[1496]: time="2025-07-09T23:49:18.616886800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 9 23:49:18.616904 containerd[1496]: time="2025-07-09T23:49:18.616902720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617209 containerd[1496]: time="2025-07-09T23:49:18.617183520Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617245 containerd[1496]: time="2025-07-09T23:49:18.617208120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617245 containerd[1496]: time="2025-07-09T23:49:18.617225240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617245 containerd[1496]: time="2025-07-09T23:49:18.617237320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617356 containerd[1496]: time="2025-07-09T23:49:18.617338120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617575 containerd[1496]: time="2025-07-09T23:49:18.617550280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617625 containerd[1496]: time="2025-07-09T23:49:18.617591000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 9 23:49:18.617625 containerd[1496]: time="2025-07-09T23:49:18.617602040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 9 23:49:18.618215 containerd[1496]: time="2025-07-09T23:49:18.618182240Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 9 23:49:18.618829 containerd[1496]: time="2025-07-09T23:49:18.618706920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 9 23:49:18.618829 containerd[1496]: time="2025-07-09T23:49:18.618797640Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 23:49:18.775688 tar[1493]: linux-arm64/README.md
Jul 9 23:49:18.798755 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 9 23:49:18.836160 containerd[1496]: time="2025-07-09T23:49:18.836072160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 9 23:49:18.836160 containerd[1496]: time="2025-07-09T23:49:18.836155320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 9 23:49:18.836160 containerd[1496]: time="2025-07-09T23:49:18.836171760Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836183840Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836198440Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836211600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836242640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836275200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836301400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836313480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836323560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 9 23:49:18.836395 containerd[1496]: time="2025-07-09T23:49:18.836339240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 9 23:49:18.836565 containerd[1496]: time="2025-07-09T23:49:18.836510880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 9 23:49:18.836565 containerd[1496]: time="2025-07-09T23:49:18.836533200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 9 23:49:18.836565 containerd[1496]: time="2025-07-09T23:49:18.836549080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 9 23:49:18.836565 containerd[1496]: time="2025-07-09T23:49:18.836559480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 9 23:49:18.836630 containerd[1496]: time="2025-07-09T23:49:18.836571680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 9 23:49:18.836630 containerd[1496]: time="2025-07-09T23:49:18.836582840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 9 23:49:18.836630 containerd[1496]: time="2025-07-09T23:49:18.836601240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 9 23:49:18.836630 containerd[1496]: time="2025-07-09T23:49:18.836611840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 9 23:49:18.836630 containerd[1496]: time="2025-07-09T23:49:18.836622320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 9 23:49:18.836713 containerd[1496]: time="2025-07-09T23:49:18.836632200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 9 23:49:18.836713 containerd[1496]: time="2025-07-09T23:49:18.836642720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 9 23:49:18.836874 containerd[1496]: time="2025-07-09T23:49:18.836832520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 9 23:49:18.836874 containerd[1496]: time="2025-07-09T23:49:18.836859000Z" level=info msg="Start snapshots syncer"
Jul 9 23:49:18.836920 containerd[1496]: time="2025-07-09T23:49:18.836887920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 9 23:49:18.837228 containerd[1496]: time="2025-07-09T23:49:18.837130240Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 9 23:49:18.837228 containerd[1496]: time="2025-07-09T23:49:18.837188000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 9 23:49:18.837447 containerd[1496]: time="2025-07-09T23:49:18.837256800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 9 23:49:18.837447 containerd[1496]: time="2025-07-09T23:49:18.837404000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 9 23:49:18.837447 containerd[1496]: time="2025-07-09T23:49:18.837427920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 9 23:49:18.837520 containerd[1496]: time="2025-07-09T23:49:18.837458680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 9 23:49:18.837520 containerd[1496]: time="2025-07-09T23:49:18.837472360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 9 23:49:18.837520 containerd[1496]: time="2025-07-09T23:49:18.837484120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 9 23:49:18.837520 containerd[1496]: time="2025-07-09T23:49:18.837504920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 9 23:49:18.837520 containerd[1496]: time="2025-07-09T23:49:18.837516680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 9 23:49:18.837605 containerd[1496]: time="2025-07-09T23:49:18.837543880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 9 23:49:18.837605 containerd[1496]: time="2025-07-09T23:49:18.837556520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 9 23:49:18.837605 containerd[1496]: time="2025-07-09T23:49:18.837568960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 9 23:49:18.837659 containerd[1496]: time="2025-07-09T23:49:18.837611920Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:49:18.837659 containerd[1496]: time="2025-07-09T23:49:18.837627800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 9 23:49:18.837659 containerd[1496]: time="2025-07-09T23:49:18.837637760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:49:18.837710 containerd[1496]: time="2025-07-09T23:49:18.837666360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 9 23:49:18.837710 containerd[1496]: time="2025-07-09T23:49:18.837675240Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 9 23:49:18.837710 containerd[1496]: time="2025-07-09T23:49:18.837684960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 9 23:49:18.837710 containerd[1496]: time="2025-07-09T23:49:18.837695240Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 9 23:49:18.837795 containerd[1496]: time="2025-07-09T23:49:18.837773160Z" level=info msg="runtime interface created"
Jul 9 23:49:18.837795 containerd[1496]: time="2025-07-09T23:49:18.837784120Z" level=info msg="created NRI interface"
Jul 9 23:49:18.837829 containerd[1496]: time="2025-07-09T23:49:18.837797480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 9 23:49:18.837829 containerd[1496]: time="2025-07-09T23:49:18.837809440Z" level=info msg="Connect containerd service"
Jul 9 23:49:18.837861 containerd[1496]: time="2025-07-09T23:49:18.837834800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 9 23:49:18.838712 containerd[1496]: time="2025-07-09T23:49:18.838658800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 23:49:18.949054 containerd[1496]: time="2025-07-09T23:49:18.948969720Z" level=info msg="Start subscribing containerd event"
Jul 9 23:49:18.949054 containerd[1496]: time="2025-07-09T23:49:18.949060040Z" level=info msg="Start recovering state"
Jul 9 23:49:18.949162 containerd[1496]: time="2025-07-09T23:49:18.949148880Z" level=info msg="Start event monitor"
Jul 9 23:49:18.949181 containerd[1496]: time="2025-07-09T23:49:18.949163080Z" level=info msg="Start cni network conf syncer for default"
Jul 9 23:49:18.949181 containerd[1496]: time="2025-07-09T23:49:18.949171680Z" level=info msg="Start streaming server"
Jul 9 23:49:18.949213 containerd[1496]: time="2025-07-09T23:49:18.949181840Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 9 23:49:18.949213 containerd[1496]: time="2025-07-09T23:49:18.949189200Z" level=info msg="runtime interface starting up..."
Jul 9 23:49:18.949213 containerd[1496]: time="2025-07-09T23:49:18.949194880Z" level=info msg="starting plugins..."
Jul 9 23:49:18.949213 containerd[1496]: time="2025-07-09T23:49:18.949206080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 9 23:49:18.949555 containerd[1496]: time="2025-07-09T23:49:18.949013120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 9 23:49:18.949676 containerd[1496]: time="2025-07-09T23:49:18.949604880Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 9 23:49:18.949835 systemd[1]: Started containerd.service - containerd container runtime.
Jul 9 23:49:18.950997 containerd[1496]: time="2025-07-09T23:49:18.950962400Z" level=info msg="containerd successfully booted in 0.345606s"
Jul 9 23:49:19.246564 systemd-networkd[1422]: eth0: Gained IPv6LL
Jul 9 23:49:19.250479 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 9 23:49:19.252211 systemd[1]: Reached target network-online.target - Network is Online.
Jul 9 23:49:19.255912 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 9 23:49:19.258981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:19.269451 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 9 23:49:19.294409 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 9 23:49:19.294807 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 9 23:49:19.299196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 9 23:49:19.301896 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 9 23:49:19.625905 sshd_keygen[1491]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 23:49:19.655201 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 23:49:19.658765 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 23:49:19.677231 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 23:49:19.677471 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 23:49:19.681590 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 23:49:19.706675 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 23:49:19.710459 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 23:49:19.713293 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 9 23:49:19.714787 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 23:49:19.914658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:19.916354 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 9 23:49:19.917948 systemd[1]: Startup finished in 2.187s (kernel) + 5.735s (initrd) + 3.595s (userspace) = 11.519s.
Jul 9 23:49:19.918799 (kubelet)[1605]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:49:20.376300 kubelet[1605]: E0709 23:49:20.376169 1605 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:49:20.378643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:49:20.378781 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:49:20.379090 systemd[1]: kubelet.service: Consumed 849ms CPU time, 259.7M memory peak.
Jul 9 23:49:23.677923 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 9 23:49:23.679087 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:56972.service - OpenSSH per-connection server daemon (10.0.0.1:56972).
Jul 9 23:49:23.761455 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 56972 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:23.764044 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:23.771739 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 9 23:49:23.772743 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 9 23:49:23.779497 systemd-logind[1479]: New session 1 of user core.
Jul 9 23:49:23.802491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 9 23:49:23.805645 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 9 23:49:23.836740 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 9 23:49:23.838964 systemd-logind[1479]: New session c1 of user core.
Jul 9 23:49:23.967861 systemd[1623]: Queued start job for default target default.target.
Jul 9 23:49:23.988469 systemd[1623]: Created slice app.slice - User Application Slice.
Jul 9 23:49:23.988499 systemd[1623]: Reached target paths.target - Paths.
Jul 9 23:49:23.988622 systemd[1623]: Reached target timers.target - Timers.
Jul 9 23:49:23.989972 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 9 23:49:23.999357 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 9 23:49:23.999442 systemd[1623]: Reached target sockets.target - Sockets.
Jul 9 23:49:23.999569 systemd[1623]: Reached target basic.target - Basic System.
Jul 9 23:49:23.999637 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 9 23:49:24.000247 systemd[1623]: Reached target default.target - Main User Target.
Jul 9 23:49:24.000289 systemd[1623]: Startup finished in 155ms.
Jul 9 23:49:24.000830 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 23:49:24.070873 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:56984.service - OpenSSH per-connection server daemon (10.0.0.1:56984).
Jul 9 23:49:24.136975 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 56984 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:24.138259 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.142742 systemd-logind[1479]: New session 2 of user core.
Jul 9 23:49:24.151668 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 23:49:24.204180 sshd[1636]: Connection closed by 10.0.0.1 port 56984
Jul 9 23:49:24.204734 sshd-session[1634]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:24.217590 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:56984.service: Deactivated successfully.
Jul 9 23:49:24.219186 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 23:49:24.221063 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit.
Jul 9 23:49:24.223470 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:56998.service - OpenSSH per-connection server daemon (10.0.0.1:56998).
Jul 9 23:49:24.224091 systemd-logind[1479]: Removed session 2.
Jul 9 23:49:24.295751 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 56998 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:24.297126 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.301676 systemd-logind[1479]: New session 3 of user core.
Jul 9 23:49:24.322640 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 23:49:24.370532 sshd[1644]: Connection closed by 10.0.0.1 port 56998
Jul 9 23:49:24.371177 sshd-session[1642]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:24.382625 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:56998.service: Deactivated successfully.
Jul 9 23:49:24.384214 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 23:49:24.385521 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit.
Jul 9 23:49:24.387310 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:57012.service - OpenSSH per-connection server daemon (10.0.0.1:57012).
Jul 9 23:49:24.388218 systemd-logind[1479]: Removed session 3.
Jul 9 23:49:24.443048 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 57012 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:24.444561 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.448400 systemd-logind[1479]: New session 4 of user core.
Jul 9 23:49:24.470633 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 23:49:24.522489 sshd[1652]: Connection closed by 10.0.0.1 port 57012
Jul 9 23:49:24.523359 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:24.530665 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:57012.service: Deactivated successfully.
Jul 9 23:49:24.532971 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 23:49:24.533807 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit.
Jul 9 23:49:24.536345 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:57028.service - OpenSSH per-connection server daemon (10.0.0.1:57028).
Jul 9 23:49:24.537341 systemd-logind[1479]: Removed session 4.
Jul 9 23:49:24.590055 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 57028 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:24.591302 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.595488 systemd-logind[1479]: New session 5 of user core.
Jul 9 23:49:24.611649 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 23:49:24.681874 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 23:49:24.682299 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:49:24.699127 sudo[1661]: pam_unix(sudo:session): session closed for user root
Jul 9 23:49:24.700749 sshd[1660]: Connection closed by 10.0.0.1 port 57028
Jul 9 23:49:24.701127 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:24.709932 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:57028.service: Deactivated successfully.
Jul 9 23:49:24.711775 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 23:49:24.713255 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit.
Jul 9 23:49:24.715364 systemd-logind[1479]: Removed session 5.
Jul 9 23:49:24.717491 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:57042.service - OpenSSH per-connection server daemon (10.0.0.1:57042).
Jul 9 23:49:24.781359 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 57042 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:24.782853 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:24.787494 systemd-logind[1479]: New session 6 of user core.
Jul 9 23:49:24.800178 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 23:49:24.852173 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 23:49:24.852423 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:49:24.933865 sudo[1671]: pam_unix(sudo:session): session closed for user root
Jul 9 23:49:24.939985 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 23:49:24.940274 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:49:24.950974 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 23:49:25.008697 augenrules[1693]: No rules
Jul 9 23:49:25.010195 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 23:49:25.010408 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 23:49:25.011499 sudo[1670]: pam_unix(sudo:session): session closed for user root
Jul 9 23:49:25.013611 sshd[1669]: Connection closed by 10.0.0.1 port 57042
Jul 9 23:49:25.014654 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Jul 9 23:49:25.026534 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:57042.service: Deactivated successfully.
Jul 9 23:49:25.029743 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 23:49:25.030740 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit.
Jul 9 23:49:25.033714 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:57050.service - OpenSSH per-connection server daemon (10.0.0.1:57050).
Jul 9 23:49:25.035414 systemd-logind[1479]: Removed session 6.
Jul 9 23:49:25.094367 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 57050 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:49:25.096204 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:49:25.101051 systemd-logind[1479]: New session 7 of user core.
Jul 9 23:49:25.110659 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 23:49:25.166154 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 23:49:25.166446 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 23:49:25.652807 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 23:49:25.671798 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 23:49:25.968248 dockerd[1726]: time="2025-07-09T23:49:25.967978838Z" level=info msg="Starting up"
Jul 9 23:49:25.969446 dockerd[1726]: time="2025-07-09T23:49:25.969406637Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jul 9 23:49:26.019644 dockerd[1726]: time="2025-07-09T23:49:26.019599714Z" level=info msg="Loading containers: start."
Jul 9 23:49:26.028454 kernel: Initializing XFRM netlink socket
Jul 9 23:49:26.236636 systemd-networkd[1422]: docker0: Link UP
Jul 9 23:49:26.239903 dockerd[1726]: time="2025-07-09T23:49:26.239857435Z" level=info msg="Loading containers: done."
Jul 9 23:49:26.251304 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1009197201-merged.mount: Deactivated successfully.
Jul 9 23:49:26.253509 dockerd[1726]: time="2025-07-09T23:49:26.253457624Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 23:49:26.253588 dockerd[1726]: time="2025-07-09T23:49:26.253541400Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jul 9 23:49:26.253669 dockerd[1726]: time="2025-07-09T23:49:26.253651043Z" level=info msg="Initializing buildkit"
Jul 9 23:49:26.273156 dockerd[1726]: time="2025-07-09T23:49:26.273105973Z" level=info msg="Completed buildkit initialization"
Jul 9 23:49:26.280153 dockerd[1726]: time="2025-07-09T23:49:26.280101339Z" level=info msg="Daemon has completed initialization"
Jul 9 23:49:26.280315 dockerd[1726]: time="2025-07-09T23:49:26.280278019Z" level=info msg="API listen on /run/docker.sock"
Jul 9 23:49:26.280405 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 23:49:26.765430 containerd[1496]: time="2025-07-09T23:49:26.765386046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 9 23:49:27.415185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228615532.mount: Deactivated successfully.
Jul 9 23:49:28.254404 containerd[1496]: time="2025-07-09T23:49:28.252734496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:28.254404 containerd[1496]: time="2025-07-09T23:49:28.253288213Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 9 23:49:28.254404 containerd[1496]: time="2025-07-09T23:49:28.253861149Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:28.257049 containerd[1496]: time="2025-07-09T23:49:28.257012236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:28.258658 containerd[1496]: time="2025-07-09T23:49:28.258612740Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.493181414s"
Jul 9 23:49:28.258719 containerd[1496]: time="2025-07-09T23:49:28.258658674Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 9 23:49:28.262019 containerd[1496]: time="2025-07-09T23:49:28.261984444Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 23:49:29.342483 containerd[1496]: time="2025-07-09T23:49:29.342407631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:29.344578 containerd[1496]: time="2025-07-09T23:49:29.344536876Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 9 23:49:29.345633 containerd[1496]: time="2025-07-09T23:49:29.345576773Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:29.348308 containerd[1496]: time="2025-07-09T23:49:29.348264392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:29.349815 containerd[1496]: time="2025-07-09T23:49:29.349767665Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.087743107s"
Jul 9 23:49:29.349815 containerd[1496]: time="2025-07-09T23:49:29.349801764Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 9 23:49:29.350384 containerd[1496]: time="2025-07-09T23:49:29.350341194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 23:49:30.428158 containerd[1496]: time="2025-07-09T23:49:30.428106575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:30.428974 containerd[1496]: time="2025-07-09T23:49:30.428944192Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 9 23:49:30.429894 containerd[1496]: time="2025-07-09T23:49:30.429828598Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:30.432534 containerd[1496]: time="2025-07-09T23:49:30.432492267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:30.433576 containerd[1496]: time="2025-07-09T23:49:30.433525736Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.083121249s"
Jul 9 23:49:30.433576 containerd[1496]: time="2025-07-09T23:49:30.433567978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 9 23:49:30.434147 containerd[1496]: time="2025-07-09T23:49:30.434103440Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 23:49:30.629156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 23:49:30.630682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:30.792877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:30.796859 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 23:49:30.837068 kubelet[2008]: E0709 23:49:30.837002 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 23:49:30.840072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 23:49:30.840207 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 23:49:30.840822 systemd[1]: kubelet.service: Consumed 146ms CPU time, 105.6M memory peak.
Jul 9 23:49:31.455574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408122132.mount: Deactivated successfully.
Jul 9 23:49:31.923667 containerd[1496]: time="2025-07-09T23:49:31.923507744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:31.924837 containerd[1496]: time="2025-07-09T23:49:31.924635946Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 9 23:49:31.925534 containerd[1496]: time="2025-07-09T23:49:31.925506304Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:31.928290 containerd[1496]: time="2025-07-09T23:49:31.927816087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:31.928458 containerd[1496]: time="2025-07-09T23:49:31.928406300Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.4942593s"
Jul 9 23:49:31.928539 containerd[1496]: time="2025-07-09T23:49:31.928523153Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 9 23:49:31.929025 containerd[1496]: time="2025-07-09T23:49:31.928995315Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 23:49:32.493544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866164780.mount: Deactivated successfully.
Jul 9 23:49:33.153372 containerd[1496]: time="2025-07-09T23:49:33.153309687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:33.154205 containerd[1496]: time="2025-07-09T23:49:33.154156496Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 9 23:49:33.155029 containerd[1496]: time="2025-07-09T23:49:33.154990575Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:33.158678 containerd[1496]: time="2025-07-09T23:49:33.158608998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:33.159485 containerd[1496]: time="2025-07-09T23:49:33.159338877Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.230308411s"
Jul 9 23:49:33.159485 containerd[1496]: time="2025-07-09T23:49:33.159375433Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 9 23:49:33.159918 containerd[1496]: time="2025-07-09T23:49:33.159878114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 23:49:33.670395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398246164.mount: Deactivated successfully.
Jul 9 23:49:33.676738 containerd[1496]: time="2025-07-09T23:49:33.676688431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:49:33.677660 containerd[1496]: time="2025-07-09T23:49:33.677472226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 9 23:49:33.678371 containerd[1496]: time="2025-07-09T23:49:33.678334399Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:49:33.680539 containerd[1496]: time="2025-07-09T23:49:33.680476983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 9 23:49:33.681478 containerd[1496]: time="2025-07-09T23:49:33.681170705Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 521.263737ms"
Jul 9 23:49:33.681478 containerd[1496]: time="2025-07-09T23:49:33.681201913Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 23:49:33.681737 containerd[1496]: time="2025-07-09T23:49:33.681713894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 23:49:34.113076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364213399.mount: Deactivated successfully.
Jul 9 23:49:35.478931 containerd[1496]: time="2025-07-09T23:49:35.478881650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:35.479367 containerd[1496]: time="2025-07-09T23:49:35.479309385Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 9 23:49:35.480384 containerd[1496]: time="2025-07-09T23:49:35.480331875Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:35.483392 containerd[1496]: time="2025-07-09T23:49:35.483345893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 23:49:35.484667 containerd[1496]: time="2025-07-09T23:49:35.484586422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.802837926s"
Jul 9 23:49:35.484667 containerd[1496]: time="2025-07-09T23:49:35.484631411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 9 23:49:40.587071 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:40.587672 systemd[1]: kubelet.service: Consumed 146ms CPU time, 105.6M memory peak.
Jul 9 23:49:40.589724 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:40.610517 systemd[1]: Reload requested from client PID 2165 ('systemctl') (unit session-7.scope)...
Jul 9 23:49:40.610534 systemd[1]: Reloading...
Jul 9 23:49:40.678461 zram_generator::config[2207]: No configuration found.
Jul 9 23:49:40.750263 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:49:40.835750 systemd[1]: Reloading finished in 224 ms.
Jul 9 23:49:40.886040 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 23:49:40.886406 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 23:49:40.887523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:40.889828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:41.039617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:41.053786 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:49:41.093476 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:49:41.093476 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:49:41.093476 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:49:41.093476 kubelet[2251]: I0709 23:49:41.092809 2251 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:49:41.810273 kubelet[2251]: I0709 23:49:41.810228 2251 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 23:49:41.810273 kubelet[2251]: I0709 23:49:41.810259 2251 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:49:41.810519 kubelet[2251]: I0709 23:49:41.810498 2251 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 23:49:41.850601 kubelet[2251]: E0709 23:49:41.850552 2251 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 9 23:49:41.852575 kubelet[2251]: I0709 23:49:41.852538 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:49:41.866465 kubelet[2251]: I0709 23:49:41.866246 2251 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:49:41.869042 kubelet[2251]: I0709 23:49:41.869014 2251 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:49:41.869492 kubelet[2251]: I0709 23:49:41.869461 2251 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:49:41.869720 kubelet[2251]: I0709 23:49:41.869565 2251 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:49:41.869927 kubelet[2251]: I0709 23:49:41.869912 2251 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:49:41.869981 kubelet[2251]: I0709 23:49:41.869973 2251 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 23:49:41.870831 kubelet[2251]: I0709 23:49:41.870807 2251 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:49:41.875038 kubelet[2251]: I0709 23:49:41.875010 2251 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 23:49:41.875188 kubelet[2251]: I0709 23:49:41.875143 2251 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:49:41.875188 kubelet[2251]: I0709 23:49:41.875174 2251 kubelet.go:386] "Adding apiserver pod source"
Jul 9 23:49:41.876373 kubelet[2251]: I0709 23:49:41.876315 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:49:41.879212 kubelet[2251]: E0709 23:49:41.879149 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 9 23:49:41.879212 kubelet[2251]: E0709 23:49:41.879185 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 9 23:49:41.879688 kubelet[2251]: I0709 23:49:41.879653 2251 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:49:41.880443 kubelet[2251]: I0709 23:49:41.880404 2251 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 23:49:41.883938 kubelet[2251]: W0709 23:49:41.883913 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 23:49:41.886486 kubelet[2251]: I0709 23:49:41.886459 2251 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:49:41.886555 kubelet[2251]: I0709 23:49:41.886510 2251 server.go:1289] "Started kubelet"
Jul 9 23:49:41.888146 kubelet[2251]: I0709 23:49:41.888107 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:49:41.889529 kubelet[2251]: I0709 23:49:41.888241 2251 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 23:49:41.889529 kubelet[2251]: I0709 23:49:41.888578 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 23:49:41.889529 kubelet[2251]: I0709 23:49:41.888906 2251 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 23:49:41.889529 kubelet[2251]: I0709 23:49:41.889167 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:49:41.889529 kubelet[2251]: I0709 23:49:41.889274 2251 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 23:49:41.890681 kubelet[2251]: E0709 23:49:41.890642 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 23:49:41.890681 kubelet[2251]: I0709 23:49:41.890682 2251 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:49:41.890924 kubelet[2251]: I0709 23:49:41.890898 2251 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 23:49:41.891235 kubelet[2251]: I0709 23:49:41.890994 2251 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 23:49:41.891485 kubelet[2251]: E0709 23:49:41.891456 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 23:49:41.896479 kubelet[2251]: I0709 23:49:41.892224 2251 factory.go:223] Registration of the systemd container factory successfully
Jul 9 23:49:41.896479 kubelet[2251]: I0709 23:49:41.892332 2251 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 23:49:41.898387 kubelet[2251]: E0709 23:49:41.893774 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms"
Jul 9 23:49:41.898963 kubelet[2251]: E0709 23:49:41.889026 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850ba32ae89270b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 23:49:41.886478091 +0000 UTC m=+0.826056322,LastTimestamp:2025-07-09 23:49:41.886478091 +0000 UTC m=+0.826056322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9 23:49:41.899178 kubelet[2251]: E0709 23:49:41.899152 2251 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 23:49:41.899783 kubelet[2251]: I0709 23:49:41.899761 2251 factory.go:223] Registration of the containerd container factory successfully
Jul 9 23:49:41.912360 kubelet[2251]: I0709 23:49:41.912331 2251 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 23:49:41.912854 kubelet[2251]: I0709 23:49:41.912520 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 23:49:41.912854 kubelet[2251]: I0709 23:49:41.912543 2251 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:49:41.916548 kubelet[2251]: I0709 23:49:41.916493 2251 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 23:49:41.917635 kubelet[2251]: I0709 23:49:41.917595 2251 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 23:49:41.917635 kubelet[2251]: I0709 23:49:41.917630 2251 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 23:49:41.917730 kubelet[2251]: I0709 23:49:41.917661 2251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 23:49:41.917730 kubelet[2251]: I0709 23:49:41.917672 2251 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 9 23:49:41.917730 kubelet[2251]: E0709 23:49:41.917723 2251 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 23:49:41.991381 kubelet[2251]: E0709 23:49:41.991330 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 23:49:41.996282 kubelet[2251]: I0709 23:49:41.996237 2251 policy_none.go:49] "None policy: Start"
Jul 9 23:49:41.996282 kubelet[2251]: I0709 23:49:41.996276 2251 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 23:49:41.996282 kubelet[2251]: I0709 23:49:41.996291 2251 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 23:49:41.997289 kubelet[2251]: E0709 23:49:41.997247 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 9 23:49:42.006866 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 9 23:49:42.018524 kubelet[2251]: E0709 23:49:42.018478 2251 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 9 23:49:42.024764 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 9 23:49:42.027697 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 9 23:49:42.052403 kubelet[2251]: E0709 23:49:42.052370 2251 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 9 23:49:42.053126 kubelet[2251]: I0709 23:49:42.053110 2251 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 23:49:42.053188 kubelet[2251]: I0709 23:49:42.053135 2251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 23:49:42.053405 kubelet[2251]: I0709 23:49:42.053389 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 23:49:42.055273 kubelet[2251]: E0709 23:49:42.055244 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 23:49:42.055399 kubelet[2251]: E0709 23:49:42.055378 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 9 23:49:42.099162 kubelet[2251]: E0709 23:49:42.099033 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms"
Jul 9 23:49:42.155497 kubelet[2251]: I0709 23:49:42.155323 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 23:49:42.155911 kubelet[2251]: E0709 23:49:42.155882 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 9 23:49:42.234013 systemd[1]: Created slice kubepods-burstable-pod7ca16c59fd6782bde876667152d8aeb1.slice - libcontainer container kubepods-burstable-pod7ca16c59fd6782bde876667152d8aeb1.slice.
Jul 9 23:49:42.258466 kubelet[2251]: E0709 23:49:42.258401 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.261898 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 9 23:49:42.272742 kubelet[2251]: E0709 23:49:42.272541 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.275039 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 9 23:49:42.276879 kubelet[2251]: E0709 23:49:42.276709 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.292898 kubelet[2251]: I0709 23:49:42.292869 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 23:49:42.293028 kubelet[2251]: I0709 23:49:42.293011 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 23:49:42.293122 kubelet[2251]: I0709 23:49:42.293110 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:42.293267 kubelet[2251]: I0709 23:49:42.293170 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:42.293267 kubelet[2251]: I0709 23:49:42.293192 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:42.293267 kubelet[2251]: I0709 23:49:42.293207 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 9 23:49:42.293461 kubelet[2251]: I0709 23:49:42.293387 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 23:49:42.293461 kubelet[2251]: I0709 23:49:42.293414 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:42.293545 kubelet[2251]: I0709 23:49:42.293430 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:42.357420 kubelet[2251]: I0709 23:49:42.357311 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 23:49:42.358317 kubelet[2251]: E0709 23:49:42.358183 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 9 23:49:42.499689 kubelet[2251]: E0709 23:49:42.499645 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms"
Jul 9 23:49:42.560048 kubelet[2251]: E0709 23:49:42.559948 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.560629 containerd[1496]: time="2025-07-09T23:49:42.560592959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ca16c59fd6782bde876667152d8aeb1,Namespace:kube-system,Attempt:0,}"
Jul 9 23:49:42.573962 kubelet[2251]: E0709 23:49:42.573867 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.574338 containerd[1496]: time="2025-07-09T23:49:42.574300990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 9 23:49:42.578009 kubelet[2251]: E0709 23:49:42.577874 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.578725 containerd[1496]: time="2025-07-09T23:49:42.578675398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 9 23:49:42.583769 containerd[1496]: time="2025-07-09T23:49:42.583681752Z" level=info msg="connecting to shim a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e" address="unix:///run/containerd/s/dbc350fd98b652c68fab2c782afa201636ff3b77de3680b8125136c877995db4" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:49:42.613714 systemd[1]: Started cri-containerd-a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e.scope - libcontainer container a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e.
Jul 9 23:49:42.617389 containerd[1496]: time="2025-07-09T23:49:42.617322999Z" level=info msg="connecting to shim 4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc" address="unix:///run/containerd/s/72edd0126d14db89ce0c0ad40cd0ba1db7bcfc21fd892691dd8bc3329cbac23e" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:49:42.618293 containerd[1496]: time="2025-07-09T23:49:42.617741220Z" level=info msg="connecting to shim a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f" address="unix:///run/containerd/s/aabdefe225b67dba816cc0e9c9adcdb70d4ad87b2c910c00eefdada23961c8cf" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:49:42.645720 systemd[1]: Started cri-containerd-4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc.scope - libcontainer container 4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc.
Jul 9 23:49:42.649376 systemd[1]: Started cri-containerd-a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f.scope - libcontainer container a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f.
Jul 9 23:49:42.659830 containerd[1496]: time="2025-07-09T23:49:42.659754088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ca16c59fd6782bde876667152d8aeb1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e\""
Jul 9 23:49:42.661629 kubelet[2251]: E0709 23:49:42.661605 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.665847 containerd[1496]: time="2025-07-09T23:49:42.665798106Z" level=info msg="CreateContainer within sandbox \"a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 9 23:49:42.678454 containerd[1496]: time="2025-07-09T23:49:42.678130388Z" level=info msg="Container 30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:42.695343 containerd[1496]: time="2025-07-09T23:49:42.695298524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f\""
Jul 9 23:49:42.696370 kubelet[2251]: E0709 23:49:42.696340 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.696678 containerd[1496]: time="2025-07-09T23:49:42.696647707Z" level=info msg="CreateContainer within sandbox \"a51c6e7779f4875dc71e38c5c6e091c2e7e58ce40b1258aef7cddd0660551c8e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4\""
Jul 9 23:49:42.697202 containerd[1496]: time="2025-07-09T23:49:42.697175627Z" level=info msg="StartContainer for \"30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4\""
Jul 9 23:49:42.698716 containerd[1496]: time="2025-07-09T23:49:42.698676016Z" level=info msg="connecting to shim 30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4" address="unix:///run/containerd/s/dbc350fd98b652c68fab2c782afa201636ff3b77de3680b8125136c877995db4" protocol=ttrpc version=3
Jul 9 23:49:42.703404 containerd[1496]: time="2025-07-09T23:49:42.703271978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc\""
Jul 9 23:49:42.703707 containerd[1496]: time="2025-07-09T23:49:42.703599477Z" level=info msg="CreateContainer within sandbox \"a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 9 23:49:42.703971 kubelet[2251]: E0709 23:49:42.703942 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.710019 containerd[1496]: time="2025-07-09T23:49:42.709985494Z" level=info msg="CreateContainer within sandbox \"4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 9 23:49:42.720079 containerd[1496]: time="2025-07-09T23:49:42.720042386Z" level=info msg="Container 7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:42.722615 systemd[1]: Started cri-containerd-30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4.scope - libcontainer container 30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4.
Jul 9 23:49:42.734909 containerd[1496]: time="2025-07-09T23:49:42.734856393Z" level=info msg="CreateContainer within sandbox \"a0c846e85f231ac765152799150490f0a5033100c310eeeb07c11e5460c1c00f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04\""
Jul 9 23:49:42.735372 containerd[1496]: time="2025-07-09T23:49:42.735351715Z" level=info msg="StartContainer for \"7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04\""
Jul 9 23:49:42.737248 containerd[1496]: time="2025-07-09T23:49:42.736976743Z" level=info msg="connecting to shim 7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04" address="unix:///run/containerd/s/aabdefe225b67dba816cc0e9c9adcdb70d4ad87b2c910c00eefdada23961c8cf" protocol=ttrpc version=3
Jul 9 23:49:42.738090 containerd[1496]: time="2025-07-09T23:49:42.738062425Z" level=info msg="Container 7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:49:42.747693 containerd[1496]: time="2025-07-09T23:49:42.747650640Z" level=info msg="CreateContainer within sandbox \"4fe8aa5a30889eaf5eddd5b56867dfd94e4d197969ce3159d0e79a7dab0c2bfc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090\""
Jul 9 23:49:42.748536 containerd[1496]: time="2025-07-09T23:49:42.748365839Z" level=info msg="StartContainer for \"7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090\""
Jul 9 23:49:42.749635 containerd[1496]: time="2025-07-09T23:49:42.749608719Z" level=info msg="connecting to shim 7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090" address="unix:///run/containerd/s/72edd0126d14db89ce0c0ad40cd0ba1db7bcfc21fd892691dd8bc3329cbac23e" protocol=ttrpc version=3
Jul 9 23:49:42.759688 systemd[1]: Started cri-containerd-7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04.scope - libcontainer container 7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04.
Jul 9 23:49:42.761081 kubelet[2251]: I0709 23:49:42.761019 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 23:49:42.763383 kubelet[2251]: E0709 23:49:42.763293 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 9 23:49:42.776989 containerd[1496]: time="2025-07-09T23:49:42.775129021Z" level=info msg="StartContainer for \"30153c45ee401d2a2745a3b2ae4e2d95dc685d6a167373b84c703242291333f4\" returns successfully"
Jul 9 23:49:42.781655 systemd[1]: Started cri-containerd-7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090.scope - libcontainer container 7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090.
Jul 9 23:49:42.838393 containerd[1496]: time="2025-07-09T23:49:42.838249873Z" level=info msg="StartContainer for \"7cf3fbce617be5480b400986a900c828b9345d50c6d7159a4da58728782a3c04\" returns successfully"
Jul 9 23:49:42.868500 containerd[1496]: time="2025-07-09T23:49:42.868386192Z" level=info msg="StartContainer for \"7d5159c85e3583878f30f52f6164ed14480357efcc8ee67c9caaa315c17a2090\" returns successfully"
Jul 9 23:49:42.936062 kubelet[2251]: E0709 23:49:42.936032 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.936549 kubelet[2251]: E0709 23:49:42.936487 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.937867 kubelet[2251]: E0709 23:49:42.937847 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.937981 kubelet[2251]: E0709 23:49:42.937965 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:42.941461 kubelet[2251]: E0709 23:49:42.941401 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:42.941764 kubelet[2251]: E0709 23:49:42.941737 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:43.045909 kubelet[2251]: E0709 23:49:43.045863 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 23:49:43.565739 kubelet[2251]: I0709 23:49:43.565707 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 23:49:43.944129 kubelet[2251]: E0709 23:49:43.943921 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:43.944845 kubelet[2251]: E0709 23:49:43.944820 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:43.945039 kubelet[2251]: E0709 23:49:43.945007 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:43.945152 kubelet[2251]: E0709 23:49:43.945137 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:44.948883 kubelet[2251]: E0709 23:49:44.948848 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 23:49:44.952446 kubelet[2251]: E0709 23:49:44.949536 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:44.995181 kubelet[2251]: E0709 23:49:44.995139 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 9 23:49:45.035512 kubelet[2251]: I0709 23:49:45.035478 2251 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 9 23:49:45.093597 kubelet[2251]: I0709 23:49:45.093568 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 23:49:45.103575 kubelet[2251]: E0709 23:49:45.103542 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 9 23:49:45.103744 kubelet[2251]: I0709 23:49:45.103732 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 9 23:49:45.109032 kubelet[2251]: E0709 23:49:45.108992 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 9 23:49:45.109224 kubelet[2251]: I0709 23:49:45.109212 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:45.113670 kubelet[2251]: E0709 23:49:45.113636 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:45.430788 kubelet[2251]: I0709 23:49:45.430281 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:45.433306 kubelet[2251]: E0709 23:49:45.433208 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 9 23:49:45.433765 kubelet[2251]: E0709 23:49:45.433678 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:45.878217 kubelet[2251]: I0709 23:49:45.877988 2251 apiserver.go:52] "Watching apiserver"
Jul 9 23:49:45.891533 kubelet[2251]: I0709 23:49:45.891487 2251 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 9 23:49:46.292274 kubelet[2251]: I0709 23:49:46.292140 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 23:49:46.297594 kubelet[2251]: E0709 23:49:46.297561 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:46.949469 kubelet[2251]: E0709 23:49:46.949417 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:49:46.999590 systemd[1]: Reload requested from client PID 2542 ('systemctl') (unit session-7.scope)...
Jul 9 23:49:46.999606 systemd[1]: Reloading...
Jul 9 23:49:47.072465 zram_generator::config[2585]: No configuration found.
Jul 9 23:49:47.228569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 23:49:47.352270 systemd[1]: Reloading finished in 352 ms.
Jul 9 23:49:47.382655 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:47.395430 systemd[1]: kubelet.service: Deactivated successfully.
Jul 9 23:49:47.395710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:47.395776 systemd[1]: kubelet.service: Consumed 1.311s CPU time, 128.5M memory peak.
Jul 9 23:49:47.397720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 23:49:47.555296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 23:49:47.560851 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 23:49:47.629610 kubelet[2627]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:49:47.629610 kubelet[2627]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 23:49:47.629610 kubelet[2627]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 23:49:47.630091 kubelet[2627]: I0709 23:49:47.629635 2627 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 23:49:47.637903 kubelet[2627]: I0709 23:49:47.637779 2627 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 23:49:47.637903 kubelet[2627]: I0709 23:49:47.637814 2627 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 23:49:47.638397 kubelet[2627]: I0709 23:49:47.638376 2627 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 23:49:47.640297 kubelet[2627]: I0709 23:49:47.640260 2627 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 9 23:49:47.643554 kubelet[2627]: I0709 23:49:47.643501 2627 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 23:49:47.649588 kubelet[2627]: I0709 23:49:47.649560 2627 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 9 23:49:47.656289 kubelet[2627]: I0709 23:49:47.656241 2627 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 23:49:47.656801 kubelet[2627]: I0709 23:49:47.656687 2627 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 23:49:47.657018 kubelet[2627]: I0709 23:49:47.656803 2627 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 23:49:47.657091 kubelet[2627]: I0709 23:49:47.657028 2627 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 23:49:47.657091 kubelet[2627]: I0709 23:49:47.657037 2627 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 23:49:47.657163 kubelet[2627]: I0709 23:49:47.657113 2627 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 23:49:47.657332 kubelet[2627]: I0709 23:49:47.657317 2627 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 23:49:47.657358 kubelet[2627]: I0709 23:49:47.657334 2627 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 23:49:47.657946 kubelet[2627]: I0709 23:49:47.657868 2627 kubelet.go:386] "Adding apiserver pod source"
Jul 9 23:49:47.657946 kubelet[2627]: I0709 23:49:47.657893 2627 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 23:49:47.661655 kubelet[2627]: I0709 23:49:47.661634 2627 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Jul 9 23:49:47.662621 kubelet[2627]: I0709 23:49:47.662602 2627 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 23:49:47.666033 kubelet[2627]: I0709 23:49:47.666015 2627 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 23:49:47.668783 kubelet[2627]: I0709 23:49:47.668753 2627 server.go:1289] "Started kubelet"
Jul 9 23:49:47.671838 kubelet[2627]: I0709 23:49:47.671806 2627 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 23:49:47.672203 kubelet[2627]: I0709 23:49:47.672181 2627 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 23:49:47.673588 kubelet[2627]: I0709 23:49:47.673565 2627 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 23:49:47.673718 kubelet[2627]: E0709 23:49:47.673700 2627 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul
9 23:49:47.676538 kubelet[2627]: I0709 23:49:47.676516 2627 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 23:49:47.676657 kubelet[2627]: I0709 23:49:47.676642 2627 factory.go:223] Registration of the systemd container factory successfully Jul 9 23:49:47.676879 kubelet[2627]: I0709 23:49:47.676855 2627 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 23:49:47.677061 kubelet[2627]: I0709 23:49:47.677039 2627 reconciler.go:26] "Reconciler: start to sync state" Jul 9 23:49:47.685699 kubelet[2627]: I0709 23:49:47.685618 2627 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 23:49:47.686127 kubelet[2627]: I0709 23:49:47.686056 2627 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 23:49:47.686317 kubelet[2627]: I0709 23:49:47.686292 2627 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 23:49:47.687479 kubelet[2627]: I0709 23:49:47.687455 2627 server.go:317] "Adding debug handlers to kubelet server" Jul 9 23:49:47.691724 kubelet[2627]: I0709 23:49:47.691693 2627 factory.go:223] Registration of the containerd container factory successfully Jul 9 23:49:47.713764 kubelet[2627]: I0709 23:49:47.713563 2627 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 9 23:49:47.716376 kubelet[2627]: I0709 23:49:47.716329 2627 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 9 23:49:47.716563 kubelet[2627]: I0709 23:49:47.716551 2627 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 9 23:49:47.716679 kubelet[2627]: I0709 23:49:47.716666 2627 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 23:49:47.716729 kubelet[2627]: I0709 23:49:47.716720 2627 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 23:49:47.716848 kubelet[2627]: E0709 23:49:47.716824 2627 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 23:49:47.740067 kubelet[2627]: I0709 23:49:47.740040 2627 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 23:49:47.740198 kubelet[2627]: I0709 23:49:47.740183 2627 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 23:49:47.740285 kubelet[2627]: I0709 23:49:47.740277 2627 state_mem.go:36] "Initialized new in-memory state store" Jul 9 23:49:47.740537 kubelet[2627]: I0709 23:49:47.740519 2627 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 23:49:47.740631 kubelet[2627]: I0709 23:49:47.740608 2627 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 23:49:47.740686 kubelet[2627]: I0709 23:49:47.740678 2627 policy_none.go:49] "None policy: Start" Jul 9 23:49:47.740751 kubelet[2627]: I0709 23:49:47.740741 2627 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 23:49:47.740811 kubelet[2627]: I0709 23:49:47.740802 2627 state_mem.go:35] "Initializing new in-memory state store" Jul 9 23:49:47.740954 kubelet[2627]: I0709 23:49:47.740942 2627 state_mem.go:75] "Updated machine memory state" Jul 9 23:49:47.746672 kubelet[2627]: E0709 23:49:47.745920 2627 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 23:49:47.746672 kubelet[2627]: I0709 23:49:47.746119 2627 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 23:49:47.746672 kubelet[2627]: I0709 23:49:47.746131 2627 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 23:49:47.746672 kubelet[2627]: I0709 23:49:47.746389 2627 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Jul 9 23:49:47.749675 kubelet[2627]: E0709 23:49:47.749624 2627 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 23:49:47.818174 kubelet[2627]: I0709 23:49:47.818035 2627 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:47.818174 kubelet[2627]: I0709 23:49:47.818080 2627 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:47.818174 kubelet[2627]: I0709 23:49:47.818128 2627 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:47.839411 kubelet[2627]: E0709 23:49:47.839359 2627 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:47.852409 kubelet[2627]: I0709 23:49:47.852378 2627 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 23:49:47.863065 kubelet[2627]: I0709 23:49:47.862746 2627 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 23:49:47.863065 kubelet[2627]: I0709 23:49:47.862835 2627 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 23:49:47.878030 kubelet[2627]: I0709 23:49:47.877978 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:47.878030 kubelet[2627]: I0709 23:49:47.878033 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:47.878237 kubelet[2627]: I0709 23:49:47.878052 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:47.878237 kubelet[2627]: I0709 23:49:47.878085 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:47.878237 kubelet[2627]: I0709 23:49:47.878112 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:47.878326 kubelet[2627]: I0709 23:49:47.878310 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:47.878353 kubelet[2627]: I0709 23:49:47.878336 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7ca16c59fd6782bde876667152d8aeb1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ca16c59fd6782bde876667152d8aeb1\") " pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:47.878373 kubelet[2627]: I0709 23:49:47.878352 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:47.878373 kubelet[2627]: I0709 23:49:47.878367 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 23:49:48.009258 sudo[2667]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 23:49:48.009564 sudo[2667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 23:49:48.140643 kubelet[2627]: E0709 23:49:48.140515 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.140767 kubelet[2627]: E0709 23:49:48.140650 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.140911 kubelet[2627]: E0709 23:49:48.140796 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.468763 sudo[2667]: 
pam_unix(sudo:session): session closed for user root Jul 9 23:49:48.661154 kubelet[2627]: I0709 23:49:48.660887 2627 apiserver.go:52] "Watching apiserver" Jul 9 23:49:48.677723 kubelet[2627]: I0709 23:49:48.677662 2627 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 23:49:48.731575 kubelet[2627]: I0709 23:49:48.731473 2627 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:48.731717 kubelet[2627]: E0709 23:49:48.731686 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.731717 kubelet[2627]: I0709 23:49:48.731708 2627 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:48.737806 kubelet[2627]: E0709 23:49:48.737769 2627 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 9 23:49:48.737944 kubelet[2627]: E0709 23:49:48.737928 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.747325 kubelet[2627]: E0709 23:49:48.747274 2627 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 23:49:48.747518 kubelet[2627]: E0709 23:49:48.747497 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:48.763586 kubelet[2627]: I0709 23:49:48.763516 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.763497522 
podStartE2EDuration="1.763497522s" podCreationTimestamp="2025-07-09 23:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:48.755865831 +0000 UTC m=+1.185333882" watchObservedRunningTime="2025-07-09 23:49:48.763497522 +0000 UTC m=+1.192965613" Jul 9 23:49:48.777523 kubelet[2627]: I0709 23:49:48.773355 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.773338444 podStartE2EDuration="2.773338444s" podCreationTimestamp="2025-07-09 23:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:48.763873114 +0000 UTC m=+1.193341205" watchObservedRunningTime="2025-07-09 23:49:48.773338444 +0000 UTC m=+1.202806535" Jul 9 23:49:48.777523 kubelet[2627]: I0709 23:49:48.773532 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7735268789999998 podStartE2EDuration="1.773526879s" podCreationTimestamp="2025-07-09 23:49:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:48.773045699 +0000 UTC m=+1.202513830" watchObservedRunningTime="2025-07-09 23:49:48.773526879 +0000 UTC m=+1.202994970" Jul 9 23:49:49.733373 kubelet[2627]: E0709 23:49:49.733178 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:49.733373 kubelet[2627]: E0709 23:49:49.733297 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:50.405305 sudo[1705]: 
pam_unix(sudo:session): session closed for user root Jul 9 23:49:50.407893 sshd[1704]: Connection closed by 10.0.0.1 port 57050 Jul 9 23:49:50.408870 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Jul 9 23:49:50.412993 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:57050.service: Deactivated successfully. Jul 9 23:49:50.414865 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 23:49:50.415680 systemd[1]: session-7.scope: Consumed 7.972s CPU time, 265.3M memory peak. Jul 9 23:49:50.416915 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Jul 9 23:49:50.418389 systemd-logind[1479]: Removed session 7. Jul 9 23:49:52.097892 kubelet[2627]: E0709 23:49:52.097849 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:52.740810 kubelet[2627]: E0709 23:49:52.740095 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:53.435025 kubelet[2627]: I0709 23:49:53.434982 2627 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 23:49:53.435452 containerd[1496]: time="2025-07-09T23:49:53.435329273Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 23:49:53.436377 kubelet[2627]: I0709 23:49:53.435551 2627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 23:49:54.291109 systemd[1]: Created slice kubepods-besteffort-pode4fa8810_0181_4573_8ec7_24866dda5ac4.slice - libcontainer container kubepods-besteffort-pode4fa8810_0181_4573_8ec7_24866dda5ac4.slice. 
Jul 9 23:49:54.309945 systemd[1]: Created slice kubepods-burstable-pod5c5b2cf1_f6bd_424f_bcaa_966201e849e3.slice - libcontainer container kubepods-burstable-pod5c5b2cf1_f6bd_424f_bcaa_966201e849e3.slice. Jul 9 23:49:54.319860 kubelet[2627]: I0709 23:49:54.319826 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmk6w\" (UniqueName: \"kubernetes.io/projected/e4fa8810-0181-4573-8ec7-24866dda5ac4-kube-api-access-bmk6w\") pod \"kube-proxy-lbp5l\" (UID: \"e4fa8810-0181-4573-8ec7-24866dda5ac4\") " pod="kube-system/kube-proxy-lbp5l" Jul 9 23:49:54.319860 kubelet[2627]: I0709 23:49:54.319861 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-cgroup\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319880 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cni-path\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319895 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-config-path\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319911 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hubble-tls\") pod 
\"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319927 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4fa8810-0181-4573-8ec7-24866dda5ac4-kube-proxy\") pod \"kube-proxy-lbp5l\" (UID: \"e4fa8810-0181-4573-8ec7-24866dda5ac4\") " pod="kube-system/kube-proxy-lbp5l" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319940 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-run\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320004 kubelet[2627]: I0709 23:49:54.319953 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-bpf-maps\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.319974 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hostproc\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.319991 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4fa8810-0181-4573-8ec7-24866dda5ac4-xtables-lock\") pod \"kube-proxy-lbp5l\" (UID: \"e4fa8810-0181-4573-8ec7-24866dda5ac4\") " pod="kube-system/kube-proxy-lbp5l" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.320008 
2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-etc-cni-netd\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.320023 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-xtables-lock\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.320038 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-clustermesh-secrets\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320122 kubelet[2627]: I0709 23:49:54.320052 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-lib-modules\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320234 kubelet[2627]: I0709 23:49:54.320065 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-net\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320234 kubelet[2627]: I0709 23:49:54.320078 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-kernel\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320234 kubelet[2627]: I0709 23:49:54.320103 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hvvq\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-kube-api-access-8hvvq\") pod \"cilium-szd9m\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") " pod="kube-system/cilium-szd9m" Jul 9 23:49:54.320234 kubelet[2627]: I0709 23:49:54.320122 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4fa8810-0181-4573-8ec7-24866dda5ac4-lib-modules\") pod \"kube-proxy-lbp5l\" (UID: \"e4fa8810-0181-4573-8ec7-24866dda5ac4\") " pod="kube-system/kube-proxy-lbp5l" Jul 9 23:49:54.533464 systemd[1]: Created slice kubepods-besteffort-podb67c43f2_520a_4507_8c6e_980ebe6fd384.slice - libcontainer container kubepods-besteffort-podb67c43f2_520a_4507_8c6e_980ebe6fd384.slice. 
Jul 9 23:49:54.605984 kubelet[2627]: E0709 23:49:54.605845 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:54.607069 containerd[1496]: time="2025-07-09T23:49:54.606737094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbp5l,Uid:e4fa8810-0181-4573-8ec7-24866dda5ac4,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:54.613598 kubelet[2627]: E0709 23:49:54.613568 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:54.614318 containerd[1496]: time="2025-07-09T23:49:54.614287736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szd9m,Uid:5c5b2cf1-f6bd-424f-bcaa-966201e849e3,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:54.624162 kubelet[2627]: I0709 23:49:54.624114 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b67c43f2-520a-4507-8c6e-980ebe6fd384-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-8swxk\" (UID: \"b67c43f2-520a-4507-8c6e-980ebe6fd384\") " pod="kube-system/cilium-operator-6c4d7847fc-8swxk" Jul 9 23:49:54.624162 kubelet[2627]: I0709 23:49:54.624160 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfjs\" (UniqueName: \"kubernetes.io/projected/b67c43f2-520a-4507-8c6e-980ebe6fd384-kube-api-access-xhfjs\") pod \"cilium-operator-6c4d7847fc-8swxk\" (UID: \"b67c43f2-520a-4507-8c6e-980ebe6fd384\") " pod="kube-system/cilium-operator-6c4d7847fc-8swxk" Jul 9 23:49:54.633907 containerd[1496]: time="2025-07-09T23:49:54.633869043Z" level=info msg="connecting to shim 5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c" 
address="unix:///run/containerd/s/4e23a2d6294c4748cf627062781108690f8d05666bfdc390847c001f0725d399" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:54.638088 containerd[1496]: time="2025-07-09T23:49:54.638049883Z" level=info msg="connecting to shim 73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:54.662660 systemd[1]: Started cri-containerd-5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c.scope - libcontainer container 5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c. Jul 9 23:49:54.665750 systemd[1]: Started cri-containerd-73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95.scope - libcontainer container 73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95. Jul 9 23:49:54.695588 containerd[1496]: time="2025-07-09T23:49:54.695546944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-szd9m,Uid:5c5b2cf1-f6bd-424f-bcaa-966201e849e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\"" Jul 9 23:49:54.696315 kubelet[2627]: E0709 23:49:54.696286 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:54.697658 containerd[1496]: time="2025-07-09T23:49:54.697623752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lbp5l,Uid:e4fa8810-0181-4573-8ec7-24866dda5ac4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c\"" Jul 9 23:49:54.698093 containerd[1496]: time="2025-07-09T23:49:54.697977062Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 
23:49:54.698487 kubelet[2627]: E0709 23:49:54.698413 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:54.702699 containerd[1496]: time="2025-07-09T23:49:54.702541036Z" level=info msg="CreateContainer within sandbox \"5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 23:49:54.738317 containerd[1496]: time="2025-07-09T23:49:54.738256574Z" level=info msg="Container c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:49:54.745889 containerd[1496]: time="2025-07-09T23:49:54.745844274Z" level=info msg="CreateContainer within sandbox \"5bee1e5a14921b39eb5e52aaa0c6e54dbe9208443211e97b6999a8b234390d2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d\"" Jul 9 23:49:54.747712 containerd[1496]: time="2025-07-09T23:49:54.747682224Z" level=info msg="StartContainer for \"c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d\"" Jul 9 23:49:54.750041 containerd[1496]: time="2025-07-09T23:49:54.750006485Z" level=info msg="connecting to shim c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d" address="unix:///run/containerd/s/4e23a2d6294c4748cf627062781108690f8d05666bfdc390847c001f0725d399" protocol=ttrpc version=3 Jul 9 23:49:54.775642 systemd[1]: Started cri-containerd-c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d.scope - libcontainer container c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d. 
Jul 9 23:49:54.817576 containerd[1496]: time="2025-07-09T23:49:54.817521404Z" level=info msg="StartContainer for \"c9100849c7eb9d2297e325f816856a9a38d8509bd718c78e787023a56726dd6d\" returns successfully" Jul 9 23:49:54.839852 kubelet[2627]: E0709 23:49:54.838637 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:54.840205 containerd[1496]: time="2025-07-09T23:49:54.839501009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8swxk,Uid:b67c43f2-520a-4507-8c6e-980ebe6fd384,Namespace:kube-system,Attempt:0,}" Jul 9 23:49:54.877532 containerd[1496]: time="2025-07-09T23:49:54.877381503Z" level=info msg="connecting to shim 3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205" address="unix:///run/containerd/s/f4add3b806034855d45408f4afeadfde00bff2bfc763689c29799ddf5fb5e63e" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:49:54.901671 systemd[1]: Started cri-containerd-3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205.scope - libcontainer container 3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205. 
Jul 9 23:49:54.952157 containerd[1496]: time="2025-07-09T23:49:54.951767667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-8swxk,Uid:b67c43f2-520a-4507-8c6e-980ebe6fd384,Namespace:kube-system,Attempt:0,} returns sandbox id \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\"" Jul 9 23:49:54.953217 kubelet[2627]: E0709 23:49:54.953175 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:55.448573 kubelet[2627]: E0709 23:49:55.448542 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:55.754788 kubelet[2627]: E0709 23:49:55.754622 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:55.756267 kubelet[2627]: E0709 23:49:55.756245 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:55.781097 kubelet[2627]: I0709 23:49:55.781031 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lbp5l" podStartSLOduration=1.781013658 podStartE2EDuration="1.781013658s" podCreationTimestamp="2025-07-09 23:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:49:55.766760503 +0000 UTC m=+8.196228594" watchObservedRunningTime="2025-07-09 23:49:55.781013658 +0000 UTC m=+8.210481749" Jul 9 23:49:55.910928 kubelet[2627]: E0709 23:49:55.910803 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:56.758858 kubelet[2627]: E0709 23:49:56.758827 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:56.759325 kubelet[2627]: E0709 23:49:56.758891 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:57.760171 kubelet[2627]: E0709 23:49:57.760129 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:49:58.992809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593817026.mount: Deactivated successfully. Jul 9 23:50:02.397957 containerd[1496]: time="2025-07-09T23:50:02.397901952Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:02.398639 containerd[1496]: time="2025-07-09T23:50:02.398481187Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 9 23:50:02.399466 containerd[1496]: time="2025-07-09T23:50:02.399417736Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:02.401333 containerd[1496]: time="2025-07-09T23:50:02.401283556Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", 
repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.70327375s" Jul 9 23:50:02.401333 containerd[1496]: time="2025-07-09T23:50:02.401323582Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 9 23:50:02.418280 containerd[1496]: time="2025-07-09T23:50:02.418245235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 23:50:02.439310 containerd[1496]: time="2025-07-09T23:50:02.439264998Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 23:50:02.468468 containerd[1496]: time="2025-07-09T23:50:02.467747720Z" level=info msg="Container efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:02.471806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422702486.mount: Deactivated successfully. 
Jul 9 23:50:02.475138 containerd[1496]: time="2025-07-09T23:50:02.475012670Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\"" Jul 9 23:50:02.483288 containerd[1496]: time="2025-07-09T23:50:02.481865085Z" level=info msg="StartContainer for \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\"" Jul 9 23:50:02.484202 containerd[1496]: time="2025-07-09T23:50:02.484163472Z" level=info msg="connecting to shim efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" protocol=ttrpc version=3 Jul 9 23:50:02.537669 systemd[1]: Started cri-containerd-efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b.scope - libcontainer container efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b. Jul 9 23:50:02.585667 containerd[1496]: time="2025-07-09T23:50:02.585607340Z" level=info msg="StartContainer for \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" returns successfully" Jul 9 23:50:02.645557 systemd[1]: cri-containerd-efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b.scope: Deactivated successfully. 
Jul 9 23:50:02.684178 containerd[1496]: time="2025-07-09T23:50:02.684124004Z" level=info msg="received exit event container_id:\"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" id:\"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" pid:3053 exited_at:{seconds:1752105002 nanos:668259537}" Jul 9 23:50:02.685377 containerd[1496]: time="2025-07-09T23:50:02.685348970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" id:\"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" pid:3053 exited_at:{seconds:1752105002 nanos:668259537}" Jul 9 23:50:02.731353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b-rootfs.mount: Deactivated successfully. Jul 9 23:50:02.778350 kubelet[2627]: E0709 23:50:02.778317 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:03.399273 update_engine[1485]: I20250709 23:50:03.399215 1485 update_attempter.cc:509] Updating boot flags... Jul 9 23:50:03.779572 kubelet[2627]: E0709 23:50:03.779368 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:03.833734 containerd[1496]: time="2025-07-09T23:50:03.833672512Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 23:50:03.870077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3573740813.mount: Deactivated successfully. 
Jul 9 23:50:03.898934 containerd[1496]: time="2025-07-09T23:50:03.898880323Z" level=info msg="Container d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:03.903355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127596837.mount: Deactivated successfully. Jul 9 23:50:03.922278 containerd[1496]: time="2025-07-09T23:50:03.922235057Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\"" Jul 9 23:50:03.923198 containerd[1496]: time="2025-07-09T23:50:03.923144995Z" level=info msg="StartContainer for \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\"" Jul 9 23:50:03.925452 containerd[1496]: time="2025-07-09T23:50:03.925400527Z" level=info msg="connecting to shim d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" protocol=ttrpc version=3 Jul 9 23:50:03.954936 systemd[1]: Started cri-containerd-d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70.scope - libcontainer container d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70. Jul 9 23:50:03.997667 containerd[1496]: time="2025-07-09T23:50:03.997620692Z" level=info msg="StartContainer for \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" returns successfully" Jul 9 23:50:04.060873 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 23:50:04.061108 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 23:50:04.061499 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 23:50:04.063186 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 9 23:50:04.065846 containerd[1496]: time="2025-07-09T23:50:04.065797442Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" id:\"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" pid:3128 exited_at:{seconds:1752105004 nanos:65373334}" Jul 9 23:50:04.066573 systemd[1]: cri-containerd-d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70.scope: Deactivated successfully. Jul 9 23:50:04.066874 systemd[1]: cri-containerd-d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70.scope: Consumed 50ms CPU time, 5.8M memory peak, 2.3M written to disk. Jul 9 23:50:04.073789 containerd[1496]: time="2025-07-09T23:50:04.073692187Z" level=info msg="received exit event container_id:\"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" id:\"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" pid:3128 exited_at:{seconds:1752105004 nanos:65373334}" Jul 9 23:50:04.091583 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 23:50:04.328079 containerd[1496]: time="2025-07-09T23:50:04.327309964Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:04.328340 containerd[1496]: time="2025-07-09T23:50:04.328323369Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 9 23:50:04.329454 containerd[1496]: time="2025-07-09T23:50:04.329132917Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 23:50:04.330867 containerd[1496]: time="2025-07-09T23:50:04.330542079Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.912257178s" Jul 9 23:50:04.330867 containerd[1496]: time="2025-07-09T23:50:04.330589864Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 9 23:50:04.337411 containerd[1496]: time="2025-07-09T23:50:04.337365197Z" level=info msg="CreateContainer within sandbox \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 23:50:04.348999 containerd[1496]: time="2025-07-09T23:50:04.348951994Z" level=info msg="Container 
603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:04.359076 containerd[1496]: time="2025-07-09T23:50:04.358915416Z" level=info msg="CreateContainer within sandbox \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\"" Jul 9 23:50:04.361062 containerd[1496]: time="2025-07-09T23:50:04.359488958Z" level=info msg="StartContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\"" Jul 9 23:50:04.362252 containerd[1496]: time="2025-07-09T23:50:04.362220948Z" level=info msg="connecting to shim 603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3" address="unix:///run/containerd/s/f4add3b806034855d45408f4afeadfde00bff2bfc763689c29799ddf5fb5e63e" protocol=ttrpc version=3 Jul 9 23:50:04.383681 systemd[1]: Started cri-containerd-603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3.scope - libcontainer container 603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3. 
Jul 9 23:50:04.419865 containerd[1496]: time="2025-07-09T23:50:04.419759936Z" level=info msg="StartContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" returns successfully" Jul 9 23:50:04.790686 kubelet[2627]: E0709 23:50:04.790649 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:04.796534 kubelet[2627]: E0709 23:50:04.796492 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:04.796887 containerd[1496]: time="2025-07-09T23:50:04.796822407Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 23:50:04.818691 containerd[1496]: time="2025-07-09T23:50:04.818643542Z" level=info msg="Container f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:04.834491 kubelet[2627]: I0709 23:50:04.833862 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-8swxk" podStartSLOduration=1.45723164 podStartE2EDuration="10.833843056s" podCreationTimestamp="2025-07-09 23:49:54 +0000 UTC" firstStartedPulling="2025-07-09 23:49:54.954786237 +0000 UTC m=+7.384254328" lastFinishedPulling="2025-07-09 23:50:04.331397653 +0000 UTC m=+16.760865744" observedRunningTime="2025-07-09 23:50:04.833404672 +0000 UTC m=+17.262872763" watchObservedRunningTime="2025-07-09 23:50:04.833843056 +0000 UTC m=+17.263311147" Jul 9 23:50:04.842271 containerd[1496]: time="2025-07-09T23:50:04.842206495Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for 
&ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\"" Jul 9 23:50:04.844047 containerd[1496]: time="2025-07-09T23:50:04.843605780Z" level=info msg="StartContainer for \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\"" Jul 9 23:50:04.848118 containerd[1496]: time="2025-07-09T23:50:04.848056716Z" level=info msg="connecting to shim f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" protocol=ttrpc version=3 Jul 9 23:50:04.869471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70-rootfs.mount: Deactivated successfully. Jul 9 23:50:04.887753 systemd[1]: Started cri-containerd-f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58.scope - libcontainer container f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58. Jul 9 23:50:04.957089 systemd[1]: cri-containerd-f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58.scope: Deactivated successfully. 
Jul 9 23:50:04.961987 containerd[1496]: time="2025-07-09T23:50:04.961945622Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" id:\"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" pid:3214 exited_at:{seconds:1752105004 nanos:961642436}" Jul 9 23:50:04.962464 containerd[1496]: time="2025-07-09T23:50:04.962239211Z" level=info msg="received exit event container_id:\"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" id:\"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" pid:3214 exited_at:{seconds:1752105004 nanos:961642436}" Jul 9 23:50:05.011703 containerd[1496]: time="2025-07-09T23:50:05.011633849Z" level=info msg="StartContainer for \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" returns successfully" Jul 9 23:50:05.045295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58-rootfs.mount: Deactivated successfully. 
Jul 9 23:50:05.794944 kubelet[2627]: E0709 23:50:05.794803 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:05.797024 kubelet[2627]: E0709 23:50:05.795598 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:05.804294 containerd[1496]: time="2025-07-09T23:50:05.804254430Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 23:50:05.822125 containerd[1496]: time="2025-07-09T23:50:05.822079314Z" level=info msg="Container fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:05.822578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669376681.mount: Deactivated successfully. 
Jul 9 23:50:05.830566 containerd[1496]: time="2025-07-09T23:50:05.830509616Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\"" Jul 9 23:50:05.831156 containerd[1496]: time="2025-07-09T23:50:05.831055777Z" level=info msg="StartContainer for \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\"" Jul 9 23:50:05.833467 containerd[1496]: time="2025-07-09T23:50:05.833408571Z" level=info msg="connecting to shim fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" protocol=ttrpc version=3 Jul 9 23:50:05.854645 systemd[1]: Started cri-containerd-fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650.scope - libcontainer container fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650. Jul 9 23:50:05.879706 systemd[1]: cri-containerd-fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650.scope: Deactivated successfully. 
Jul 9 23:50:05.881456 containerd[1496]: time="2025-07-09T23:50:05.881216154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" id:\"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" pid:3253 exited_at:{seconds:1752105005 nanos:880611291}" Jul 9 23:50:05.883171 containerd[1496]: time="2025-07-09T23:50:05.883141633Z" level=info msg="received exit event container_id:\"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" id:\"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" pid:3253 exited_at:{seconds:1752105005 nanos:880611291}" Jul 9 23:50:05.890106 containerd[1496]: time="2025-07-09T23:50:05.890069494Z" level=info msg="StartContainer for \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" returns successfully" Jul 9 23:50:05.903765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650-rootfs.mount: Deactivated successfully. Jul 9 23:50:06.805128 kubelet[2627]: E0709 23:50:06.805092 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:06.810746 containerd[1496]: time="2025-07-09T23:50:06.810701958Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 23:50:06.829878 containerd[1496]: time="2025-07-09T23:50:06.829827651Z" level=info msg="Container 206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938: CDI devices from CRI Config.CDIDevices: []" Jul 9 23:50:06.830144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316241518.mount: Deactivated successfully. 
Jul 9 23:50:06.842105 containerd[1496]: time="2025-07-09T23:50:06.842042633Z" level=info msg="CreateContainer within sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\"" Jul 9 23:50:06.843281 containerd[1496]: time="2025-07-09T23:50:06.843170964Z" level=info msg="StartContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\"" Jul 9 23:50:06.844484 containerd[1496]: time="2025-07-09T23:50:06.844426181Z" level=info msg="connecting to shim 206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938" address="unix:///run/containerd/s/3421a5d8c6c14e92cbc6de42780e4369a2c7faaeafdb80ff885ae8d3083a85fa" protocol=ttrpc version=3 Jul 9 23:50:06.866660 systemd[1]: Started cri-containerd-206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938.scope - libcontainer container 206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938. Jul 9 23:50:06.919220 containerd[1496]: time="2025-07-09T23:50:06.919168315Z" level=info msg="StartContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" returns successfully" Jul 9 23:50:07.049252 containerd[1496]: time="2025-07-09T23:50:07.049014539Z" level=info msg="TaskExit event in podsandbox handler container_id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" id:\"df0a8ca79a77adf88d73224732941fce9d2dadfc1d095ee3012817247791804d\" pid:3319 exited_at:{seconds:1752105007 nanos:48642994}" Jul 9 23:50:07.102344 kubelet[2627]: I0709 23:50:07.102250 2627 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 23:50:07.166705 systemd[1]: Created slice kubepods-burstable-pod851d0eae_065b_41cf_bce5_83d3330abe75.slice - libcontainer container kubepods-burstable-pod851d0eae_065b_41cf_bce5_83d3330abe75.slice. 
Jul 9 23:50:07.179611 systemd[1]: Created slice kubepods-burstable-podc9bde777_3230_45bc_8224_89144e6c9077.slice - libcontainer container kubepods-burstable-podc9bde777_3230_45bc_8224_89144e6c9077.slice. Jul 9 23:50:07.313483 kubelet[2627]: I0709 23:50:07.313366 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9bde777-3230-45bc-8224-89144e6c9077-config-volume\") pod \"coredns-674b8bbfcf-f4kck\" (UID: \"c9bde777-3230-45bc-8224-89144e6c9077\") " pod="kube-system/coredns-674b8bbfcf-f4kck" Jul 9 23:50:07.313483 kubelet[2627]: I0709 23:50:07.313416 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj7g\" (UniqueName: \"kubernetes.io/projected/851d0eae-065b-41cf-bce5-83d3330abe75-kube-api-access-dgj7g\") pod \"coredns-674b8bbfcf-lc4zm\" (UID: \"851d0eae-065b-41cf-bce5-83d3330abe75\") " pod="kube-system/coredns-674b8bbfcf-lc4zm" Jul 9 23:50:07.313483 kubelet[2627]: I0709 23:50:07.313455 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/851d0eae-065b-41cf-bce5-83d3330abe75-config-volume\") pod \"coredns-674b8bbfcf-lc4zm\" (UID: \"851d0eae-065b-41cf-bce5-83d3330abe75\") " pod="kube-system/coredns-674b8bbfcf-lc4zm" Jul 9 23:50:07.313483 kubelet[2627]: I0709 23:50:07.313474 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqmg\" (UniqueName: \"kubernetes.io/projected/c9bde777-3230-45bc-8224-89144e6c9077-kube-api-access-rwqmg\") pod \"coredns-674b8bbfcf-f4kck\" (UID: \"c9bde777-3230-45bc-8224-89144e6c9077\") " pod="kube-system/coredns-674b8bbfcf-f4kck" Jul 9 23:50:07.473115 kubelet[2627]: E0709 23:50:07.473077 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:07.475254 containerd[1496]: time="2025-07-09T23:50:07.475206907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lc4zm,Uid:851d0eae-065b-41cf-bce5-83d3330abe75,Namespace:kube-system,Attempt:0,}" Jul 9 23:50:07.505066 kubelet[2627]: E0709 23:50:07.501611 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:07.505203 containerd[1496]: time="2025-07-09T23:50:07.504190361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4kck,Uid:c9bde777-3230-45bc-8224-89144e6c9077,Namespace:kube-system,Attempt:0,}" Jul 9 23:50:07.811526 kubelet[2627]: E0709 23:50:07.811405 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:07.837911 kubelet[2627]: I0709 23:50:07.837847 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-szd9m" podStartSLOduration=6.122010331 podStartE2EDuration="13.837830122s" podCreationTimestamp="2025-07-09 23:49:54 +0000 UTC" firstStartedPulling="2025-07-09 23:49:54.697664248 +0000 UTC m=+7.127132339" lastFinishedPulling="2025-07-09 23:50:02.413484039 +0000 UTC m=+14.842952130" observedRunningTime="2025-07-09 23:50:07.837685639 +0000 UTC m=+20.267153770" watchObservedRunningTime="2025-07-09 23:50:07.837830122 +0000 UTC m=+20.267298213" Jul 9 23:50:08.813765 kubelet[2627]: E0709 23:50:08.813735 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:09.478627 systemd-networkd[1422]: cilium_host: Link UP Jul 9 23:50:09.479233 systemd-networkd[1422]: cilium_net: Link UP Jul 9 23:50:09.479677 
systemd-networkd[1422]: cilium_net: Gained carrier Jul 9 23:50:09.479903 systemd-networkd[1422]: cilium_host: Gained carrier Jul 9 23:50:09.574582 systemd-networkd[1422]: cilium_vxlan: Link UP Jul 9 23:50:09.574589 systemd-networkd[1422]: cilium_vxlan: Gained carrier Jul 9 23:50:09.817293 kubelet[2627]: E0709 23:50:09.817051 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:09.979553 kernel: NET: Registered PF_ALG protocol family Jul 9 23:50:10.191545 systemd-networkd[1422]: cilium_host: Gained IPv6LL Jul 9 23:50:10.446576 systemd-networkd[1422]: cilium_net: Gained IPv6LL Jul 9 23:50:10.578948 systemd-networkd[1422]: lxc_health: Link UP Jul 9 23:50:10.579228 systemd-networkd[1422]: lxc_health: Gained carrier Jul 9 23:50:10.817390 kubelet[2627]: E0709 23:50:10.817286 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:11.138464 kernel: eth0: renamed from tmp47084 Jul 9 23:50:11.138348 systemd-networkd[1422]: lxcd8b75576de79: Link UP Jul 9 23:50:11.140018 systemd-networkd[1422]: lxcd8b75576de79: Gained carrier Jul 9 23:50:11.144883 systemd-networkd[1422]: lxc76125f0504d1: Link UP Jul 9 23:50:11.155460 kernel: eth0: renamed from tmp11368 Jul 9 23:50:11.156034 systemd-networkd[1422]: lxc76125f0504d1: Gained carrier Jul 9 23:50:11.471600 systemd-networkd[1422]: cilium_vxlan: Gained IPv6LL Jul 9 23:50:11.819867 kubelet[2627]: E0709 23:50:11.819424 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:11.982722 systemd-networkd[1422]: lxc_health: Gained IPv6LL Jul 9 23:50:12.431518 systemd-networkd[1422]: lxcd8b75576de79: Gained IPv6LL Jul 9 23:50:12.821822 
kubelet[2627]: E0709 23:50:12.821719 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 23:50:13.198682 systemd-networkd[1422]: lxc76125f0504d1: Gained IPv6LL Jul 9 23:50:14.786093 containerd[1496]: time="2025-07-09T23:50:14.786043809Z" level=info msg="connecting to shim 470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7" address="unix:///run/containerd/s/366e64d0aa560ad415bd8616f91dc905e658dfdefde000535d3d4e0ac5301fa9" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:50:14.792609 containerd[1496]: time="2025-07-09T23:50:14.792563506Z" level=info msg="connecting to shim 113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6" address="unix:///run/containerd/s/28cef19d4f299312dff7c1664763ce963b749b3d40cf44f9b7595cece1dd9aed" namespace=k8s.io protocol=ttrpc version=3 Jul 9 23:50:14.814637 systemd[1]: Started cri-containerd-470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7.scope - libcontainer container 470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7. Jul 9 23:50:14.823795 systemd[1]: Started cri-containerd-113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6.scope - libcontainer container 113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6. 
Jul 9 23:50:14.835356 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 9 23:50:14.839178 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 9 23:50:14.867308 containerd[1496]: time="2025-07-09T23:50:14.867250807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lc4zm,Uid:851d0eae-065b-41cf-bce5-83d3330abe75,Namespace:kube-system,Attempt:0,} returns sandbox id \"470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7\""
Jul 9 23:50:14.868162 kubelet[2627]: E0709 23:50:14.868138 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:14.869330 containerd[1496]: time="2025-07-09T23:50:14.869300913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-f4kck,Uid:c9bde777-3230-45bc-8224-89144e6c9077,Namespace:kube-system,Attempt:0,} returns sandbox id \"113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6\""
Jul 9 23:50:14.869924 kubelet[2627]: E0709 23:50:14.869904 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:14.873525 containerd[1496]: time="2025-07-09T23:50:14.873199797Z" level=info msg="CreateContainer within sandbox \"113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:50:14.874654 containerd[1496]: time="2025-07-09T23:50:14.874618126Z" level=info msg="CreateContainer within sandbox \"470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 9 23:50:14.882326 containerd[1496]: time="2025-07-09T23:50:14.882285676Z" level=info msg="Container 03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:14.889126 containerd[1496]: time="2025-07-09T23:50:14.889082567Z" level=info msg="Container 77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:50:14.897976 containerd[1496]: time="2025-07-09T23:50:14.897922686Z" level=info msg="CreateContainer within sandbox \"470847453592dfb26d7760d8b62b501fc57792b996e1e650a4c26e51308977d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813\""
Jul 9 23:50:14.898419 containerd[1496]: time="2025-07-09T23:50:14.898389890Z" level=info msg="StartContainer for \"03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813\""
Jul 9 23:50:14.899281 containerd[1496]: time="2025-07-09T23:50:14.899242871Z" level=info msg="connecting to shim 03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813" address="unix:///run/containerd/s/366e64d0aa560ad415bd8616f91dc905e658dfdefde000535d3d4e0ac5301fa9" protocol=ttrpc version=3
Jul 9 23:50:14.903705 containerd[1496]: time="2025-07-09T23:50:14.903660910Z" level=info msg="CreateContainer within sandbox \"113689868abd43325a1b1cfa777f6b78ab67b89d9dfcaa1c01cabf976ecf44a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9\""
Jul 9 23:50:14.904296 containerd[1496]: time="2025-07-09T23:50:14.904263452Z" level=info msg="StartContainer for \"77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9\""
Jul 9 23:50:14.905196 containerd[1496]: time="2025-07-09T23:50:14.905147588Z" level=info msg="connecting to shim 77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9" address="unix:///run/containerd/s/28cef19d4f299312dff7c1664763ce963b749b3d40cf44f9b7595cece1dd9aed" protocol=ttrpc version=3
Jul 9 23:50:14.921680 systemd[1]: Started cri-containerd-03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813.scope - libcontainer container 03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813.
Jul 9 23:50:14.924670 systemd[1]: Started cri-containerd-77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9.scope - libcontainer container 77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9.
Jul 9 23:50:14.974831 containerd[1496]: time="2025-07-09T23:50:14.974784113Z" level=info msg="StartContainer for \"03b151aa10c1e2815c1365c7e8b8e27402204c40d89d502d9c3cbae4af2b8813\" returns successfully"
Jul 9 23:50:14.975048 containerd[1496]: time="2025-07-09T23:50:14.974928649Z" level=info msg="StartContainer for \"77db00ace129596dcf644dd51771b9a6a8166852e621aae305fb94b74d63d6e9\" returns successfully"
Jul 9 23:50:15.831124 kubelet[2627]: E0709 23:50:15.831068 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:15.832094 kubelet[2627]: E0709 23:50:15.831765 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:15.842899 kubelet[2627]: I0709 23:50:15.842826 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lc4zm" podStartSLOduration=21.84279912 podStartE2EDuration="21.84279912s" podCreationTimestamp="2025-07-09 23:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:50:15.841876021 +0000 UTC m=+28.271344072" watchObservedRunningTime="2025-07-09 23:50:15.84279912 +0000 UTC m=+28.272267211"
Jul 9 23:50:15.854606 kubelet[2627]: I0709 23:50:15.854538 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-f4kck" podStartSLOduration=21.854519648 podStartE2EDuration="21.854519648s" podCreationTimestamp="2025-07-09 23:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:50:15.853410898 +0000 UTC m=+28.282878989" watchObservedRunningTime="2025-07-09 23:50:15.854519648 +0000 UTC m=+28.283987779"
Jul 9 23:50:16.833207 kubelet[2627]: E0709 23:50:16.833147 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:16.833207 kubelet[2627]: E0709 23:50:16.833185 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:17.412978 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:38796.service - OpenSSH per-connection server daemon (10.0.0.1:38796).
Jul 9 23:50:17.483455 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 38796 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:17.485204 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:17.489641 systemd-logind[1479]: New session 8 of user core.
Jul 9 23:50:17.499663 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 9 23:50:17.654122 sshd[3982]: Connection closed by 10.0.0.1 port 38796
Jul 9 23:50:17.654505 sshd-session[3980]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:17.658504 systemd-logind[1479]: Session 8 logged out. Waiting for processes to exit.
Jul 9 23:50:17.658735 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:38796.service: Deactivated successfully.
Jul 9 23:50:17.660185 systemd[1]: session-8.scope: Deactivated successfully.
Jul 9 23:50:17.661854 systemd-logind[1479]: Removed session 8.
Jul 9 23:50:17.834182 kubelet[2627]: E0709 23:50:17.834153 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:50:22.665842 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:41840.service - OpenSSH per-connection server daemon (10.0.0.1:41840).
Jul 9 23:50:22.731515 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 41840 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:22.732014 sshd-session[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:22.735789 systemd-logind[1479]: New session 9 of user core.
Jul 9 23:50:22.745574 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 9 23:50:22.884239 sshd[4000]: Connection closed by 10.0.0.1 port 41840
Jul 9 23:50:22.884128 sshd-session[3998]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:22.887911 systemd-logind[1479]: Session 9 logged out. Waiting for processes to exit.
Jul 9 23:50:22.887975 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:41840.service: Deactivated successfully.
Jul 9 23:50:22.889644 systemd[1]: session-9.scope: Deactivated successfully.
Jul 9 23:50:22.892753 systemd-logind[1479]: Removed session 9.
Jul 9 23:50:27.895379 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:41846.service - OpenSSH per-connection server daemon (10.0.0.1:41846).
Jul 9 23:50:27.960058 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 41846 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:27.963451 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:27.971511 systemd-logind[1479]: New session 10 of user core.
Jul 9 23:50:27.978668 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 9 23:50:28.113618 sshd[4020]: Connection closed by 10.0.0.1 port 41846
Jul 9 23:50:28.114654 sshd-session[4018]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:28.128194 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:41846.service: Deactivated successfully.
Jul 9 23:50:28.130235 systemd[1]: session-10.scope: Deactivated successfully.
Jul 9 23:50:28.131074 systemd-logind[1479]: Session 10 logged out. Waiting for processes to exit.
Jul 9 23:50:28.134349 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:41848.service - OpenSSH per-connection server daemon (10.0.0.1:41848).
Jul 9 23:50:28.136988 systemd-logind[1479]: Removed session 10.
Jul 9 23:50:28.196493 sshd[4034]: Accepted publickey for core from 10.0.0.1 port 41848 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:28.197640 sshd-session[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:28.202453 systemd-logind[1479]: New session 11 of user core.
Jul 9 23:50:28.212640 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 9 23:50:28.378186 sshd[4036]: Connection closed by 10.0.0.1 port 41848
Jul 9 23:50:28.378089 sshd-session[4034]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:28.389865 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:41848.service: Deactivated successfully.
Jul 9 23:50:28.397312 systemd[1]: session-11.scope: Deactivated successfully.
Jul 9 23:50:28.399068 systemd-logind[1479]: Session 11 logged out. Waiting for processes to exit.
Jul 9 23:50:28.404855 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:41858.service - OpenSSH per-connection server daemon (10.0.0.1:41858).
Jul 9 23:50:28.407745 systemd-logind[1479]: Removed session 11.
Jul 9 23:50:28.482306 sshd[4048]: Accepted publickey for core from 10.0.0.1 port 41858 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:28.483932 sshd-session[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:28.488061 systemd-logind[1479]: New session 12 of user core.
Jul 9 23:50:28.497680 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 9 23:50:28.640836 sshd[4050]: Connection closed by 10.0.0.1 port 41858
Jul 9 23:50:28.641478 sshd-session[4048]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:28.646324 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:41858.service: Deactivated successfully.
Jul 9 23:50:28.649247 systemd[1]: session-12.scope: Deactivated successfully.
Jul 9 23:50:28.650152 systemd-logind[1479]: Session 12 logged out. Waiting for processes to exit.
Jul 9 23:50:28.652165 systemd-logind[1479]: Removed session 12.
Jul 9 23:50:33.660207 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:34592.service - OpenSSH per-connection server daemon (10.0.0.1:34592).
Jul 9 23:50:33.723878 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 34592 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:33.725375 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:33.730410 systemd-logind[1479]: New session 13 of user core.
Jul 9 23:50:33.738653 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 9 23:50:33.867530 sshd[4066]: Connection closed by 10.0.0.1 port 34592
Jul 9 23:50:33.867422 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:33.871597 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:34592.service: Deactivated successfully.
Jul 9 23:50:33.873408 systemd[1]: session-13.scope: Deactivated successfully.
Jul 9 23:50:33.874115 systemd-logind[1479]: Session 13 logged out. Waiting for processes to exit.
Jul 9 23:50:33.875588 systemd-logind[1479]: Removed session 13.
Jul 9 23:50:38.879948 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:34604.service - OpenSSH per-connection server daemon (10.0.0.1:34604).
Jul 9 23:50:38.955975 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 34604 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:38.957119 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:38.962298 systemd-logind[1479]: New session 14 of user core.
Jul 9 23:50:38.969590 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 9 23:50:39.105571 sshd[4081]: Connection closed by 10.0.0.1 port 34604
Jul 9 23:50:39.104713 sshd-session[4079]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:39.114088 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:34604.service: Deactivated successfully.
Jul 9 23:50:39.116034 systemd[1]: session-14.scope: Deactivated successfully.
Jul 9 23:50:39.116832 systemd-logind[1479]: Session 14 logged out. Waiting for processes to exit.
Jul 9 23:50:39.120683 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:34608.service - OpenSSH per-connection server daemon (10.0.0.1:34608).
Jul 9 23:50:39.121809 systemd-logind[1479]: Removed session 14.
Jul 9 23:50:39.181616 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 34608 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:39.183024 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:39.188452 systemd-logind[1479]: New session 15 of user core.
Jul 9 23:50:39.197654 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 9 23:50:39.908133 sshd[4097]: Connection closed by 10.0.0.1 port 34608
Jul 9 23:50:39.908770 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:39.921810 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:34608.service: Deactivated successfully.
Jul 9 23:50:39.923619 systemd[1]: session-15.scope: Deactivated successfully.
Jul 9 23:50:39.924313 systemd-logind[1479]: Session 15 logged out. Waiting for processes to exit.
Jul 9 23:50:39.927131 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:34620.service - OpenSSH per-connection server daemon (10.0.0.1:34620).
Jul 9 23:50:39.929079 systemd-logind[1479]: Removed session 15.
Jul 9 23:50:39.995050 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 34620 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:39.996500 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:40.001285 systemd-logind[1479]: New session 16 of user core.
Jul 9 23:50:40.008610 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 9 23:50:40.771430 sshd[4110]: Connection closed by 10.0.0.1 port 34620
Jul 9 23:50:40.772027 sshd-session[4108]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:40.784133 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:34620.service: Deactivated successfully.
Jul 9 23:50:40.788430 systemd[1]: session-16.scope: Deactivated successfully.
Jul 9 23:50:40.791772 systemd-logind[1479]: Session 16 logged out. Waiting for processes to exit.
Jul 9 23:50:40.795931 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:34630.service - OpenSSH per-connection server daemon (10.0.0.1:34630).
Jul 9 23:50:40.797109 systemd-logind[1479]: Removed session 16.
Jul 9 23:50:40.867490 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 34630 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:40.868386 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:40.872890 systemd-logind[1479]: New session 17 of user core.
Jul 9 23:50:40.882669 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 9 23:50:41.121226 sshd[4135]: Connection closed by 10.0.0.1 port 34630
Jul 9 23:50:41.121643 sshd-session[4133]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:41.133552 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:34630.service: Deactivated successfully.
Jul 9 23:50:41.135864 systemd[1]: session-17.scope: Deactivated successfully.
Jul 9 23:50:41.137421 systemd-logind[1479]: Session 17 logged out. Waiting for processes to exit.
Jul 9 23:50:41.142005 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:34634.service - OpenSSH per-connection server daemon (10.0.0.1:34634).
Jul 9 23:50:41.144168 systemd-logind[1479]: Removed session 17.
Jul 9 23:50:41.195418 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 34634 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:41.196916 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:41.202359 systemd-logind[1479]: New session 18 of user core.
Jul 9 23:50:41.212683 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 9 23:50:41.328477 sshd[4148]: Connection closed by 10.0.0.1 port 34634
Jul 9 23:50:41.329000 sshd-session[4146]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:41.333350 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:34634.service: Deactivated successfully.
Jul 9 23:50:41.335985 systemd[1]: session-18.scope: Deactivated successfully.
Jul 9 23:50:41.336982 systemd-logind[1479]: Session 18 logged out. Waiting for processes to exit.
Jul 9 23:50:41.338292 systemd-logind[1479]: Removed session 18.
Jul 9 23:50:46.344854 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:57822.service - OpenSSH per-connection server daemon (10.0.0.1:57822).
Jul 9 23:50:46.409475 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 57822 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:46.410319 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:46.415491 systemd-logind[1479]: New session 19 of user core.
Jul 9 23:50:46.424679 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 9 23:50:46.537785 sshd[4169]: Connection closed by 10.0.0.1 port 57822
Jul 9 23:50:46.538284 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:46.542022 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:57822.service: Deactivated successfully.
Jul 9 23:50:46.543759 systemd[1]: session-19.scope: Deactivated successfully.
Jul 9 23:50:46.544420 systemd-logind[1479]: Session 19 logged out. Waiting for processes to exit.
Jul 9 23:50:46.545499 systemd-logind[1479]: Removed session 19.
Jul 9 23:50:51.549919 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:57836.service - OpenSSH per-connection server daemon (10.0.0.1:57836).
Jul 9 23:50:51.605730 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 57836 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:51.607118 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:51.611287 systemd-logind[1479]: New session 20 of user core.
Jul 9 23:50:51.617651 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 9 23:50:51.730624 sshd[4186]: Connection closed by 10.0.0.1 port 57836
Jul 9 23:50:51.730971 sshd-session[4184]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:51.734379 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:57836.service: Deactivated successfully.
Jul 9 23:50:51.736219 systemd[1]: session-20.scope: Deactivated successfully.
Jul 9 23:50:51.737508 systemd-logind[1479]: Session 20 logged out. Waiting for processes to exit.
Jul 9 23:50:51.738665 systemd-logind[1479]: Removed session 20.
Jul 9 23:50:56.749650 systemd[1]: Started sshd@20-10.0.0.74:22-10.0.0.1:50342.service - OpenSSH per-connection server daemon (10.0.0.1:50342).
Jul 9 23:50:56.813589 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 50342 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:56.813474 sshd-session[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:56.821000 systemd-logind[1479]: New session 21 of user core.
Jul 9 23:50:56.830714 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 9 23:50:56.974771 sshd[4203]: Connection closed by 10.0.0.1 port 50342
Jul 9 23:50:56.973727 sshd-session[4201]: pam_unix(sshd:session): session closed for user core
Jul 9 23:50:56.987865 systemd[1]: sshd@20-10.0.0.74:22-10.0.0.1:50342.service: Deactivated successfully.
Jul 9 23:50:56.989650 systemd[1]: session-21.scope: Deactivated successfully.
Jul 9 23:50:56.990722 systemd-logind[1479]: Session 21 logged out. Waiting for processes to exit.
Jul 9 23:50:56.993837 systemd[1]: Started sshd@21-10.0.0.74:22-10.0.0.1:50354.service - OpenSSH per-connection server daemon (10.0.0.1:50354).
Jul 9 23:50:56.996159 systemd-logind[1479]: Removed session 21.
Jul 9 23:50:57.068346 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 50354 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:50:57.070038 sshd-session[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:50:57.079298 systemd-logind[1479]: New session 22 of user core.
Jul 9 23:50:57.087642 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 9 23:50:59.122290 containerd[1496]: time="2025-07-09T23:50:59.122125446Z" level=info msg="StopContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" with timeout 30 (s)"
Jul 9 23:50:59.123385 containerd[1496]: time="2025-07-09T23:50:59.123228790Z" level=info msg="Stop container \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" with signal terminated"
Jul 9 23:50:59.134582 systemd[1]: cri-containerd-603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3.scope: Deactivated successfully.
Jul 9 23:50:59.137070 containerd[1496]: time="2025-07-09T23:50:59.137032791Z" level=info msg="received exit event container_id:\"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" id:\"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" pid:3179 exited_at:{seconds:1752105059 nanos:136623237}"
Jul 9 23:50:59.137258 containerd[1496]: time="2025-07-09T23:50:59.137227348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" id:\"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" pid:3179 exited_at:{seconds:1752105059 nanos:136623237}"
Jul 9 23:50:59.148765 containerd[1496]: time="2025-07-09T23:50:59.148713342Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 9 23:50:59.154452 containerd[1496]: time="2025-07-09T23:50:59.154316021Z" level=info msg="TaskExit event in podsandbox handler container_id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" id:\"b490d73483c0d28400a80319ae873ffe9261ee122c2089b27d16acc6d5de6a68\" pid:4248 exited_at:{seconds:1752105059 nanos:153800588}"
Jul 9 23:50:59.155794 containerd[1496]: time="2025-07-09T23:50:59.155689401Z" level=info msg="StopContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" with timeout 2 (s)"
Jul 9 23:50:59.156568 containerd[1496]: time="2025-07-09T23:50:59.155965637Z" level=info msg="Stop container \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" with signal terminated"
Jul 9 23:50:59.163895 systemd-networkd[1422]: lxc_health: Link DOWN
Jul 9 23:50:59.163903 systemd-networkd[1422]: lxc_health: Lost carrier
Jul 9 23:50:59.170365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3-rootfs.mount: Deactivated successfully.
Jul 9 23:50:59.186025 systemd[1]: cri-containerd-206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938.scope: Deactivated successfully.
Jul 9 23:50:59.186312 systemd[1]: cri-containerd-206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938.scope: Consumed 6.856s CPU time, 123.3M memory peak, 128K read from disk, 12.9M written to disk.
Jul 9 23:50:59.188188 containerd[1496]: time="2025-07-09T23:50:59.188149732Z" level=info msg="received exit event container_id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" pid:3289 exited_at:{seconds:1752105059 nanos:187914575}"
Jul 9 23:50:59.188729 containerd[1496]: time="2025-07-09T23:50:59.188294250Z" level=info msg="TaskExit event in podsandbox handler container_id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" id:\"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" pid:3289 exited_at:{seconds:1752105059 nanos:187914575}"
Jul 9 23:50:59.188729 containerd[1496]: time="2025-07-09T23:50:59.188610085Z" level=info msg="StopContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" returns successfully"
Jul 9 23:50:59.191646 containerd[1496]: time="2025-07-09T23:50:59.191583362Z" level=info msg="StopPodSandbox for \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\""
Jul 9 23:50:59.204253 containerd[1496]: time="2025-07-09T23:50:59.204209780Z" level=info msg="Container to stop \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.205980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938-rootfs.mount: Deactivated successfully.
Jul 9 23:50:59.211944 systemd[1]: cri-containerd-3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205.scope: Deactivated successfully.
Jul 9 23:50:59.217133 containerd[1496]: time="2025-07-09T23:50:59.217099073Z" level=info msg="StopContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" returns successfully"
Jul 9 23:50:59.217845 containerd[1496]: time="2025-07-09T23:50:59.217818943Z" level=info msg="StopPodSandbox for \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\""
Jul 9 23:50:59.217906 containerd[1496]: time="2025-07-09T23:50:59.217893942Z" level=info msg="Container to stop \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.217982 containerd[1496]: time="2025-07-09T23:50:59.217907262Z" level=info msg="Container to stop \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.217982 containerd[1496]: time="2025-07-09T23:50:59.217916222Z" level=info msg="Container to stop \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.217982 containerd[1496]: time="2025-07-09T23:50:59.217937941Z" level=info msg="Container to stop \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.217982 containerd[1496]: time="2025-07-09T23:50:59.217946981Z" level=info msg="Container to stop \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 9 23:50:59.223143 containerd[1496]: time="2025-07-09T23:50:59.223108067Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" pid:2866 exit_status:137 exited_at:{seconds:1752105059 nanos:222736232}"
Jul 9 23:50:59.223216 systemd[1]: cri-containerd-73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95.scope: Deactivated successfully.
Jul 9 23:50:59.243206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95-rootfs.mount: Deactivated successfully.
Jul 9 23:50:59.247601 containerd[1496]: time="2025-07-09T23:50:59.247551833Z" level=info msg="shim disconnected" id=73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95 namespace=k8s.io
Jul 9 23:50:59.250236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205-rootfs.mount: Deactivated successfully.
Jul 9 23:50:59.260886 containerd[1496]: time="2025-07-09T23:50:59.247589593Z" level=warning msg="cleaning up after shim disconnected" id=73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95 namespace=k8s.io
Jul 9 23:50:59.261034 containerd[1496]: time="2025-07-09T23:50:59.260895280Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:50:59.261034 containerd[1496]: time="2025-07-09T23:50:59.256580463Z" level=info msg="shim disconnected" id=3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205 namespace=k8s.io
Jul 9 23:50:59.261034 containerd[1496]: time="2025-07-09T23:50:59.260975759Z" level=warning msg="cleaning up after shim disconnected" id=3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205 namespace=k8s.io
Jul 9 23:50:59.261034 containerd[1496]: time="2025-07-09T23:50:59.260997439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 23:50:59.276503 containerd[1496]: time="2025-07-09T23:50:59.274917158Z" level=info msg="received exit event sandbox_id:\"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" exit_status:137 exited_at:{seconds:1752105059 nanos:224136852}"
Jul 9 23:50:59.276503 containerd[1496]: time="2025-07-09T23:50:59.275025436Z" level=info msg="TearDown network for sandbox \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" successfully"
Jul 9 23:50:59.276503 containerd[1496]: time="2025-07-09T23:50:59.275047876Z" level=info msg="StopPodSandbox for \"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" returns successfully"
Jul 9 23:50:59.276957 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95-shm.mount: Deactivated successfully.
Jul 9 23:50:59.277307 containerd[1496]: time="2025-07-09T23:50:59.277190325Z" level=info msg="received exit event sandbox_id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" exit_status:137 exited_at:{seconds:1752105059 nanos:222736232}"
Jul 9 23:50:59.277692 containerd[1496]: time="2025-07-09T23:50:59.277522960Z" level=error msg="Failed to handle event container_id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" pid:2866 exit_status:137 exited_at:{seconds:1752105059 nanos:222736232} for 3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
Jul 9 23:50:59.277692 containerd[1496]: time="2025-07-09T23:50:59.277569279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" id:\"73935640d2195072ef0d3d67200f1011e6e0bc76cf4f61be1930650fc6de9b95\" pid:2784 exit_status:137 exited_at:{seconds:1752105059 nanos:224136852}"
Jul 9 23:50:59.279569 containerd[1496]: time="2025-07-09T23:50:59.279525771Z" level=info msg="TearDown network for sandbox \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" successfully"
Jul 9 23:50:59.279569 containerd[1496]: time="2025-07-09T23:50:59.279556171Z" level=info msg="StopPodSandbox for \"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" returns successfully"
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373630 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfjs\" (UniqueName: \"kubernetes.io/projected/b67c43f2-520a-4507-8c6e-980ebe6fd384-kube-api-access-xhfjs\") pod \"b67c43f2-520a-4507-8c6e-980ebe6fd384\" (UID: \"b67c43f2-520a-4507-8c6e-980ebe6fd384\") "
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373673 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-cgroup\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373697 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-run\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373712 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-net\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373728 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-kernel\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.373752 kubelet[2627]: I0709 23:50:59.373746 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hvvq\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-kube-api-access-8hvvq\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373760 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-bpf-maps\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373773 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hostproc\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373786 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-etc-cni-netd\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373804 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hubble-tls\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373820 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-lib-modules\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374194 kubelet[2627]: I0709 23:50:59.373838 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b67c43f2-520a-4507-8c6e-980ebe6fd384-cilium-config-path\") pod \"b67c43f2-520a-4507-8c6e-980ebe6fd384\" (UID: \"b67c43f2-520a-4507-8c6e-980ebe6fd384\") "
Jul 9 23:50:59.374318 kubelet[2627]: I0709 23:50:59.373853 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cni-path\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374318 kubelet[2627]: I0709 23:50:59.373893 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-config-path\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374318 kubelet[2627]: I0709 23:50:59.373910 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-xtables-lock\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.374318 kubelet[2627]: I0709 23:50:59.373931 2627 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-clustermesh-secrets\") pod \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\" (UID: \"5c5b2cf1-f6bd-424f-bcaa-966201e849e3\") "
Jul 9 23:50:59.376099 kubelet[2627]: I0709 23:50:59.375865 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hostproc" (OuterVolumeSpecName: "hostproc") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID:
"5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.380926 kubelet[2627]: I0709 23:50:59.380895 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 23:50:59.380995 kubelet[2627]: I0709 23:50:59.380943 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.380995 kubelet[2627]: I0709 23:50:59.380960 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.380995 kubelet[2627]: I0709 23:50:59.380973 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.380995 kubelet[2627]: I0709 23:50:59.380986 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.381467 kubelet[2627]: I0709 23:50:59.381419 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b67c43f2-520a-4507-8c6e-980ebe6fd384-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b67c43f2-520a-4507-8c6e-980ebe6fd384" (UID: "b67c43f2-520a-4507-8c6e-980ebe6fd384"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:50:59.381581 kubelet[2627]: I0709 23:50:59.381566 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.381652 kubelet[2627]: I0709 23:50:59.381640 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.381723 kubelet[2627]: I0709 23:50:59.381711 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cni-path" (OuterVolumeSpecName: "cni-path") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.382037 kubelet[2627]: I0709 23:50:59.381991 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 23:50:59.382184 kubelet[2627]: I0709 23:50:59.382050 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.382184 kubelet[2627]: I0709 23:50:59.382068 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 23:50:59.382184 kubelet[2627]: I0709 23:50:59.382120 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b67c43f2-520a-4507-8c6e-980ebe6fd384-kube-api-access-xhfjs" (OuterVolumeSpecName: "kube-api-access-xhfjs") pod "b67c43f2-520a-4507-8c6e-980ebe6fd384" (UID: "b67c43f2-520a-4507-8c6e-980ebe6fd384"). InnerVolumeSpecName "kube-api-access-xhfjs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:50:59.382830 kubelet[2627]: I0709 23:50:59.382787 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:50:59.383729 kubelet[2627]: I0709 23:50:59.383694 2627 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-kube-api-access-8hvvq" (OuterVolumeSpecName: "kube-api-access-8hvvq") pod "5c5b2cf1-f6bd-424f-bcaa-966201e849e3" (UID: "5c5b2cf1-f6bd-424f-bcaa-966201e849e3"). InnerVolumeSpecName "kube-api-access-8hvvq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 23:50:59.474739 kubelet[2627]: I0709 23:50:59.474680 2627 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474739 kubelet[2627]: I0709 23:50:59.474721 2627 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474739 kubelet[2627]: I0709 23:50:59.474738 2627 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8hvvq\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-kube-api-access-8hvvq\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474739 kubelet[2627]: I0709 23:50:59.474754 2627 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474770 2627 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474784 2627 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474797 2627 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 
23:50:59.474811 2627 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474825 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b67c43f2-520a-4507-8c6e-980ebe6fd384-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474838 2627 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474851 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.474946 kubelet[2627]: I0709 23:50:59.474859 2627 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.475091 kubelet[2627]: I0709 23:50:59.474868 2627 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.475091 kubelet[2627]: I0709 23:50:59.474876 2627 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xhfjs\" (UniqueName: \"kubernetes.io/projected/b67c43f2-520a-4507-8c6e-980ebe6fd384-kube-api-access-xhfjs\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.475091 kubelet[2627]: I0709 23:50:59.474884 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.475091 kubelet[2627]: I0709 23:50:59.474892 2627 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5c5b2cf1-f6bd-424f-bcaa-966201e849e3-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 9 23:50:59.724948 systemd[1]: Removed slice kubepods-besteffort-podb67c43f2_520a_4507_8c6e_980ebe6fd384.slice - libcontainer container kubepods-besteffort-podb67c43f2_520a_4507_8c6e_980ebe6fd384.slice. Jul 9 23:50:59.727466 systemd[1]: Removed slice kubepods-burstable-pod5c5b2cf1_f6bd_424f_bcaa_966201e849e3.slice - libcontainer container kubepods-burstable-pod5c5b2cf1_f6bd_424f_bcaa_966201e849e3.slice. Jul 9 23:50:59.727570 systemd[1]: kubepods-burstable-pod5c5b2cf1_f6bd_424f_bcaa_966201e849e3.slice: Consumed 7.056s CPU time, 123.6M memory peak, 132K read from disk, 15.2M written to disk. Jul 9 23:50:59.923599 kubelet[2627]: I0709 23:50:59.923424 2627 scope.go:117] "RemoveContainer" containerID="206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938" Jul 9 23:50:59.927243 containerd[1496]: time="2025-07-09T23:50:59.927117130Z" level=info msg="RemoveContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\"" Jul 9 23:50:59.954934 containerd[1496]: time="2025-07-09T23:50:59.954880128Z" level=info msg="RemoveContainer for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" returns successfully" Jul 9 23:50:59.955360 kubelet[2627]: I0709 23:50:59.955325 2627 scope.go:117] "RemoveContainer" containerID="fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650" Jul 9 23:50:59.957787 containerd[1496]: time="2025-07-09T23:50:59.957764047Z" level=info msg="RemoveContainer for \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\"" Jul 9 23:50:59.963553 containerd[1496]: time="2025-07-09T23:50:59.963500924Z" 
level=info msg="RemoveContainer for \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" returns successfully" Jul 9 23:50:59.963836 kubelet[2627]: I0709 23:50:59.963806 2627 scope.go:117] "RemoveContainer" containerID="f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58" Jul 9 23:50:59.966056 containerd[1496]: time="2025-07-09T23:50:59.966029527Z" level=info msg="RemoveContainer for \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\"" Jul 9 23:50:59.969563 containerd[1496]: time="2025-07-09T23:50:59.969537276Z" level=info msg="RemoveContainer for \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" returns successfully" Jul 9 23:50:59.969749 kubelet[2627]: I0709 23:50:59.969722 2627 scope.go:117] "RemoveContainer" containerID="d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70" Jul 9 23:50:59.970904 containerd[1496]: time="2025-07-09T23:50:59.970883497Z" level=info msg="RemoveContainer for \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\"" Jul 9 23:50:59.973504 containerd[1496]: time="2025-07-09T23:50:59.973465460Z" level=info msg="RemoveContainer for \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" returns successfully" Jul 9 23:50:59.973668 kubelet[2627]: I0709 23:50:59.973625 2627 scope.go:117] "RemoveContainer" containerID="efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b" Jul 9 23:50:59.975146 containerd[1496]: time="2025-07-09T23:50:59.975070716Z" level=info msg="RemoveContainer for \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\"" Jul 9 23:50:59.978940 containerd[1496]: time="2025-07-09T23:50:59.978888381Z" level=info msg="RemoveContainer for \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" returns successfully" Jul 9 23:50:59.979252 kubelet[2627]: I0709 23:50:59.979207 2627 scope.go:117] "RemoveContainer" containerID="206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938" Jul 9 
23:50:59.979489 containerd[1496]: time="2025-07-09T23:50:59.979426573Z" level=error msg="ContainerStatus for \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\": not found" Jul 9 23:50:59.991247 kubelet[2627]: E0709 23:50:59.991171 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\": not found" containerID="206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938" Jul 9 23:50:59.991352 kubelet[2627]: I0709 23:50:59.991252 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938"} err="failed to get container status \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\": rpc error: code = NotFound desc = an error occurred when try to find container \"206a06c2fd2b74bb3cfa49b62e465f9a40d24824727fed739b9c2322e29a9938\": not found" Jul 9 23:50:59.991352 kubelet[2627]: I0709 23:50:59.991318 2627 scope.go:117] "RemoveContainer" containerID="fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650" Jul 9 23:50:59.991694 containerd[1496]: time="2025-07-09T23:50:59.991644077Z" level=error msg="ContainerStatus for \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\": not found" Jul 9 23:50:59.991849 kubelet[2627]: E0709 23:50:59.991816 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\": not found" containerID="fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650" Jul 9 23:50:59.991898 kubelet[2627]: I0709 23:50:59.991848 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650"} err="failed to get container status \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\": rpc error: code = NotFound desc = an error occurred when try to find container \"fb8f270f3042bafa87288fa60a6452a4684efa2c5c4e9c767a5c9e28fcfd0650\": not found" Jul 9 23:50:59.991898 kubelet[2627]: I0709 23:50:59.991866 2627 scope.go:117] "RemoveContainer" containerID="f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58" Jul 9 23:50:59.992089 containerd[1496]: time="2025-07-09T23:50:59.992045951Z" level=error msg="ContainerStatus for \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\": not found" Jul 9 23:50:59.992202 kubelet[2627]: E0709 23:50:59.992181 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\": not found" containerID="f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58" Jul 9 23:50:59.992238 kubelet[2627]: I0709 23:50:59.992207 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58"} err="failed to get container status \"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"f1ab4009344736a8d41a3d17d9e8ddf3ef65b6c74c7cf518ca12f3f430e08c58\": not found" Jul 9 23:50:59.992238 kubelet[2627]: I0709 23:50:59.992224 2627 scope.go:117] "RemoveContainer" containerID="d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70" Jul 9 23:50:59.992510 containerd[1496]: time="2025-07-09T23:50:59.992389226Z" level=error msg="ContainerStatus for \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\": not found" Jul 9 23:50:59.992679 kubelet[2627]: E0709 23:50:59.992638 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\": not found" containerID="d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70" Jul 9 23:50:59.992679 kubelet[2627]: I0709 23:50:59.992664 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70"} err="failed to get container status \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\": rpc error: code = NotFound desc = an error occurred when try to find container \"d462cee3f375455f06fb3a1ccd012aa8b0f5957ec5ac936381777050c8654c70\": not found" Jul 9 23:50:59.992679 kubelet[2627]: I0709 23:50:59.992678 2627 scope.go:117] "RemoveContainer" containerID="efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b" Jul 9 23:50:59.992967 containerd[1496]: time="2025-07-09T23:50:59.992918338Z" level=error msg="ContainerStatus for \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\": not found" Jul 9 23:50:59.993114 kubelet[2627]: E0709 23:50:59.993070 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\": not found" containerID="efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b" Jul 9 23:50:59.993143 kubelet[2627]: I0709 23:50:59.993089 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b"} err="failed to get container status \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"efd5398d4257a23f0cfafe4ca253efb4668f69552645d8c6f0498bcf45b94d8b\": not found" Jul 9 23:50:59.993143 kubelet[2627]: I0709 23:50:59.993132 2627 scope.go:117] "RemoveContainer" containerID="603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3" Jul 9 23:50:59.994673 containerd[1496]: time="2025-07-09T23:50:59.994644193Z" level=info msg="RemoveContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\"" Jul 9 23:50:59.997774 containerd[1496]: time="2025-07-09T23:50:59.997722589Z" level=info msg="RemoveContainer for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" returns successfully" Jul 9 23:50:59.997954 kubelet[2627]: I0709 23:50:59.997898 2627 scope.go:117] "RemoveContainer" containerID="603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3" Jul 9 23:50:59.998311 containerd[1496]: time="2025-07-09T23:50:59.998273981Z" level=error msg="ContainerStatus for \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\": not found" Jul 9 23:50:59.998620 kubelet[2627]: E0709 23:50:59.998566 2627 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\": not found" containerID="603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3" Jul 9 23:50:59.998620 kubelet[2627]: I0709 23:50:59.998595 2627 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3"} err="failed to get container status \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"603a03cb75832816b33da55063706947fa1ff8477279612e8a9b7166add513c3\": not found" Jul 9 23:51:00.169718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205-shm.mount: Deactivated successfully. Jul 9 23:51:00.169810 systemd[1]: var-lib-kubelet-pods-b67c43f2\x2d520a\x2d4507\x2d8c6e\x2d980ebe6fd384-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhfjs.mount: Deactivated successfully. Jul 9 23:51:00.169862 systemd[1]: var-lib-kubelet-pods-5c5b2cf1\x2df6bd\x2d424f\x2dbcaa\x2d966201e849e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hvvq.mount: Deactivated successfully. Jul 9 23:51:00.169918 systemd[1]: var-lib-kubelet-pods-5c5b2cf1\x2df6bd\x2d424f\x2dbcaa\x2d966201e849e3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 23:51:00.169965 systemd[1]: var-lib-kubelet-pods-5c5b2cf1\x2df6bd\x2d424f\x2dbcaa\x2d966201e849e3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 9 23:51:00.838035 containerd[1496]: time="2025-07-09T23:51:00.837964194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" id:\"3999357011909713406265d1c1098a74743fa6e57a3453f9e62c935788b17205\" pid:2866 exit_status:137 exited_at:{seconds:1752105059 nanos:222736232}" Jul 9 23:51:01.082612 sshd[4218]: Connection closed by 10.0.0.1 port 50354 Jul 9 23:51:01.083635 sshd-session[4216]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:01.093415 systemd[1]: sshd@21-10.0.0.74:22-10.0.0.1:50354.service: Deactivated successfully. Jul 9 23:51:01.095286 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 23:51:01.095561 systemd[1]: session-22.scope: Consumed 1.309s CPU time, 24M memory peak. Jul 9 23:51:01.096104 systemd-logind[1479]: Session 22 logged out. Waiting for processes to exit. Jul 9 23:51:01.098622 systemd[1]: Started sshd@22-10.0.0.74:22-10.0.0.1:50368.service - OpenSSH per-connection server daemon (10.0.0.1:50368). Jul 9 23:51:01.099290 systemd-logind[1479]: Removed session 22. Jul 9 23:51:01.156614 sshd[4371]: Accepted publickey for core from 10.0.0.1 port 50368 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8 Jul 9 23:51:01.157997 sshd-session[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 23:51:01.162109 systemd-logind[1479]: New session 23 of user core. Jul 9 23:51:01.166590 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 9 23:51:01.726419 kubelet[2627]: I0709 23:51:01.726377 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5b2cf1-f6bd-424f-bcaa-966201e849e3" path="/var/lib/kubelet/pods/5c5b2cf1-f6bd-424f-bcaa-966201e849e3/volumes" Jul 9 23:51:01.726926 kubelet[2627]: I0709 23:51:01.726901 2627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b67c43f2-520a-4507-8c6e-980ebe6fd384" path="/var/lib/kubelet/pods/b67c43f2-520a-4507-8c6e-980ebe6fd384/volumes" Jul 9 23:51:02.269992 sshd[4373]: Connection closed by 10.0.0.1 port 50368 Jul 9 23:51:02.269896 sshd-session[4371]: pam_unix(sshd:session): session closed for user core Jul 9 23:51:02.284787 systemd[1]: sshd@22-10.0.0.74:22-10.0.0.1:50368.service: Deactivated successfully. Jul 9 23:51:02.286362 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 23:51:02.293554 systemd[1]: session-23.scope: Consumed 1.019s CPU time, 26.4M memory peak. Jul 9 23:51:02.295724 systemd-logind[1479]: Session 23 logged out. Waiting for processes to exit. Jul 9 23:51:02.309758 systemd[1]: Started sshd@23-10.0.0.74:22-10.0.0.1:50372.service - OpenSSH per-connection server daemon (10.0.0.1:50372). Jul 9 23:51:02.310307 systemd-logind[1479]: Removed session 23. Jul 9 23:51:02.326476 systemd[1]: Created slice kubepods-burstable-pod0fb0d075_44e8_4d46_8d7d_c66a018d9db3.slice - libcontainer container kubepods-burstable-pod0fb0d075_44e8_4d46_8d7d_c66a018d9db3.slice. 
Jul 9 23:51:02.388220 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 50372 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:51:02.389740 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:51:02.393612 kubelet[2627]: I0709 23:51:02.393566 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-cni-path\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393612 kubelet[2627]: I0709 23:51:02.393609 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-host-proc-sys-net\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393795 kubelet[2627]: I0709 23:51:02.393633 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwtvn\" (UniqueName: \"kubernetes.io/projected/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-kube-api-access-kwtvn\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393795 kubelet[2627]: I0709 23:51:02.393650 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-cilium-ipsec-secrets\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393795 kubelet[2627]: I0709 23:51:02.393665 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-host-proc-sys-kernel\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393795 kubelet[2627]: I0709 23:51:02.393679 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-cilium-cgroup\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393795 kubelet[2627]: I0709 23:51:02.393694 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-xtables-lock\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393710 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-etc-cni-netd\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393723 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-hubble-tls\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393740 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-cilium-run\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393757 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-cilium-config-path\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393771 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-bpf-maps\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.393902 kubelet[2627]: I0709 23:51:02.393786 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-hostproc\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.394041 kubelet[2627]: I0709 23:51:02.393799 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-lib-modules\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.394041 kubelet[2627]: I0709 23:51:02.393814 2627 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0fb0d075-44e8-4d46-8d7d-c66a018d9db3-clustermesh-secrets\") pod \"cilium-7dgqc\" (UID: \"0fb0d075-44e8-4d46-8d7d-c66a018d9db3\") " pod="kube-system/cilium-7dgqc"
Jul 9 23:51:02.394278 systemd-logind[1479]: New session 24 of user core.
Jul 9 23:51:02.408674 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 9 23:51:02.458831 sshd[4387]: Connection closed by 10.0.0.1 port 50372
Jul 9 23:51:02.459321 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
Jul 9 23:51:02.470728 systemd[1]: sshd@23-10.0.0.74:22-10.0.0.1:50372.service: Deactivated successfully.
Jul 9 23:51:02.473112 systemd[1]: session-24.scope: Deactivated successfully.
Jul 9 23:51:02.473909 systemd-logind[1479]: Session 24 logged out. Waiting for processes to exit.
Jul 9 23:51:02.477295 systemd[1]: Started sshd@24-10.0.0.74:22-10.0.0.1:51260.service - OpenSSH per-connection server daemon (10.0.0.1:51260).
Jul 9 23:51:02.479326 systemd-logind[1479]: Removed session 24.
Jul 9 23:51:02.541946 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 51260 ssh2: RSA SHA256:gbh5fzx9ySCIDkMghehbh4e/pZN4DLj+F4FnNMgMZq8
Jul 9 23:51:02.543190 sshd-session[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 23:51:02.547211 systemd-logind[1479]: New session 25 of user core.
Jul 9 23:51:02.557680 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 23:51:02.634945 kubelet[2627]: E0709 23:51:02.634866 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:02.635405 containerd[1496]: time="2025-07-09T23:51:02.635368302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dgqc,Uid:0fb0d075-44e8-4d46-8d7d-c66a018d9db3,Namespace:kube-system,Attempt:0,}"
Jul 9 23:51:02.662094 containerd[1496]: time="2025-07-09T23:51:02.662027702Z" level=info msg="connecting to shim 0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" namespace=k8s.io protocol=ttrpc version=3
Jul 9 23:51:02.695673 systemd[1]: Started cri-containerd-0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1.scope - libcontainer container 0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1.
Jul 9 23:51:02.718992 containerd[1496]: time="2025-07-09T23:51:02.718940893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7dgqc,Uid:0fb0d075-44e8-4d46-8d7d-c66a018d9db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\""
Jul 9 23:51:02.719921 kubelet[2627]: E0709 23:51:02.719852 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:02.726340 containerd[1496]: time="2025-07-09T23:51:02.726296953Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 23:51:02.734558 containerd[1496]: time="2025-07-09T23:51:02.734505162Z" level=info msg="Container 46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:51:02.741327 containerd[1496]: time="2025-07-09T23:51:02.741274671Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\""
Jul 9 23:51:02.742066 containerd[1496]: time="2025-07-09T23:51:02.742044260Z" level=info msg="StartContainer for \"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\""
Jul 9 23:51:02.743119 containerd[1496]: time="2025-07-09T23:51:02.743088126Z" level=info msg="connecting to shim 46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" protocol=ttrpc version=3
Jul 9 23:51:02.769636 systemd[1]: Started cri-containerd-46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f.scope - libcontainer container 46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f.
Jul 9 23:51:02.790958 kubelet[2627]: E0709 23:51:02.790903 2627 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 23:51:02.802018 containerd[1496]: time="2025-07-09T23:51:02.801912051Z" level=info msg="StartContainer for \"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\" returns successfully"
Jul 9 23:51:02.821783 systemd[1]: cri-containerd-46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f.scope: Deactivated successfully.
Jul 9 23:51:02.823731 containerd[1496]: time="2025-07-09T23:51:02.823456400Z" level=info msg="received exit event container_id:\"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\" id:\"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\" pid:4464 exited_at:{seconds:1752105062 nanos:823187684}"
Jul 9 23:51:02.823731 containerd[1496]: time="2025-07-09T23:51:02.823645158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\" id:\"46ef3d51a26fe89173fbee5e0af061e566331a63a68b9ded2619a3647b13e99f\" pid:4464 exited_at:{seconds:1752105062 nanos:823187684}"
Jul 9 23:51:02.937071 kubelet[2627]: E0709 23:51:02.937034 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:02.942263 containerd[1496]: time="2025-07-09T23:51:02.942213276Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 23:51:02.951086 containerd[1496]: time="2025-07-09T23:51:02.950919398Z" level=info msg="Container c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:51:02.956780 containerd[1496]: time="2025-07-09T23:51:02.956727440Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\""
Jul 9 23:51:02.957348 containerd[1496]: time="2025-07-09T23:51:02.957310712Z" level=info msg="StartContainer for \"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\""
Jul 9 23:51:02.959693 containerd[1496]: time="2025-07-09T23:51:02.959413963Z" level=info msg="connecting to shim c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" protocol=ttrpc version=3
Jul 9 23:51:02.996644 systemd[1]: Started cri-containerd-c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87.scope - libcontainer container c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87.
Jul 9 23:51:03.030408 systemd[1]: cri-containerd-c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87.scope: Deactivated successfully.
Jul 9 23:51:03.031913 containerd[1496]: time="2025-07-09T23:51:03.031866713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\" id:\"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\" pid:4509 exited_at:{seconds:1752105063 nanos:31565117}"
Jul 9 23:51:03.049514 containerd[1496]: time="2025-07-09T23:51:03.049388282Z" level=info msg="received exit event container_id:\"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\" id:\"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\" pid:4509 exited_at:{seconds:1752105063 nanos:31565117}"
Jul 9 23:51:03.050660 containerd[1496]: time="2025-07-09T23:51:03.050631065Z" level=info msg="StartContainer for \"c2dac0e37789cc33758fa0167d4e6a82ef098d4abbd2a270717da933dd696f87\" returns successfully"
Jul 9 23:51:03.717793 kubelet[2627]: E0709 23:51:03.717756 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:03.943028 kubelet[2627]: E0709 23:51:03.942888 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:03.949012 containerd[1496]: time="2025-07-09T23:51:03.948931912Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 23:51:03.964803 containerd[1496]: time="2025-07-09T23:51:03.963884955Z" level=info msg="Container 2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:51:03.973882 containerd[1496]: time="2025-07-09T23:51:03.972858396Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\""
Jul 9 23:51:03.974759 containerd[1496]: time="2025-07-09T23:51:03.974723931Z" level=info msg="StartContainer for \"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\""
Jul 9 23:51:03.978159 containerd[1496]: time="2025-07-09T23:51:03.978118486Z" level=info msg="connecting to shim 2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" protocol=ttrpc version=3
Jul 9 23:51:03.998679 systemd[1]: Started cri-containerd-2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660.scope - libcontainer container 2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660.
Jul 9 23:51:04.036772 systemd[1]: cri-containerd-2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660.scope: Deactivated successfully.
Jul 9 23:51:04.038371 containerd[1496]: time="2025-07-09T23:51:04.038026265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\" id:\"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\" pid:4552 exited_at:{seconds:1752105064 nanos:37417633}"
Jul 9 23:51:04.038662 containerd[1496]: time="2025-07-09T23:51:04.038634897Z" level=info msg="received exit event container_id:\"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\" id:\"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\" pid:4552 exited_at:{seconds:1752105064 nanos:37417633}"
Jul 9 23:51:04.048502 containerd[1496]: time="2025-07-09T23:51:04.048465130Z" level=info msg="StartContainer for \"2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660\" returns successfully"
Jul 9 23:51:04.062770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f5be0a4a776d6fc957c902cb786ea4175eb0363a09fa4dbc6f1d322afc5d660-rootfs.mount: Deactivated successfully.
Jul 9 23:51:04.950866 kubelet[2627]: E0709 23:51:04.950777 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:04.958564 containerd[1496]: time="2025-07-09T23:51:04.956367870Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 23:51:04.970730 containerd[1496]: time="2025-07-09T23:51:04.970640405Z" level=info msg="Container c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:51:04.991536 containerd[1496]: time="2025-07-09T23:51:04.991473096Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\""
Jul 9 23:51:04.992340 containerd[1496]: time="2025-07-09T23:51:04.992311565Z" level=info msg="StartContainer for \"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\""
Jul 9 23:51:04.993356 containerd[1496]: time="2025-07-09T23:51:04.993324312Z" level=info msg="connecting to shim c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" protocol=ttrpc version=3
Jul 9 23:51:05.018649 systemd[1]: Started cri-containerd-c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e.scope - libcontainer container c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e.
Jul 9 23:51:05.055314 systemd[1]: cri-containerd-c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e.scope: Deactivated successfully.
Jul 9 23:51:05.057302 containerd[1496]: time="2025-07-09T23:51:05.057255340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\" id:\"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\" pid:4591 exited_at:{seconds:1752105065 nanos:56861745}"
Jul 9 23:51:05.059408 containerd[1496]: time="2025-07-09T23:51:05.059229875Z" level=info msg="received exit event container_id:\"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\" id:\"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\" pid:4591 exited_at:{seconds:1752105065 nanos:56861745}"
Jul 9 23:51:05.066465 containerd[1496]: time="2025-07-09T23:51:05.066412545Z" level=info msg="StartContainer for \"c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e\" returns successfully"
Jul 9 23:51:05.081185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9bcedc2b29ddbeb4a37e2eb34f61bd8d7bb703e02db89fd586e3fd9369a6b9e-rootfs.mount: Deactivated successfully.
Jul 9 23:51:05.967762 kubelet[2627]: E0709 23:51:05.965734 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:05.975464 containerd[1496]: time="2025-07-09T23:51:05.975106806Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 23:51:05.986170 containerd[1496]: time="2025-07-09T23:51:05.986116706Z" level=info msg="Container 9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1: CDI devices from CRI Config.CDIDevices: []"
Jul 9 23:51:05.996110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793202089.mount: Deactivated successfully.
Jul 9 23:51:05.998362 containerd[1496]: time="2025-07-09T23:51:05.998323112Z" level=info msg="CreateContainer within sandbox \"0019b2dc603df42c5d1a725a1b733496627922540febb8635d26b754a62d4ff1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\""
Jul 9 23:51:06.000483 containerd[1496]: time="2025-07-09T23:51:05.999666855Z" level=info msg="StartContainer for \"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\""
Jul 9 23:51:06.001231 containerd[1496]: time="2025-07-09T23:51:06.001193956Z" level=info msg="connecting to shim 9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1" address="unix:///run/containerd/s/5b4504f33598248f14c42b8a5b86e0b0f45b31e3165119c04fc560779d891f55" protocol=ttrpc version=3
Jul 9 23:51:06.024695 systemd[1]: Started cri-containerd-9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1.scope - libcontainer container 9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1.
Jul 9 23:51:06.059457 containerd[1496]: time="2025-07-09T23:51:06.059403355Z" level=info msg="StartContainer for \"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" returns successfully"
Jul 9 23:51:06.127511 containerd[1496]: time="2025-07-09T23:51:06.126711081Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" id:\"22160130fe6bb2fafd7451ee99dddb35128afe1cee7d174d9c28331a169cd262\" pid:4661 exited_at:{seconds:1752105066 nanos:122662811}"
Jul 9 23:51:06.375481 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 23:51:06.972445 kubelet[2627]: E0709 23:51:06.972379 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:06.993446 kubelet[2627]: I0709 23:51:06.993337 2627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7dgqc" podStartSLOduration=4.993320947 podStartE2EDuration="4.993320947s" podCreationTimestamp="2025-07-09 23:51:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 23:51:06.992001363 +0000 UTC m=+79.421469454" watchObservedRunningTime="2025-07-09 23:51:06.993320947 +0000 UTC m=+79.422789038"
Jul 9 23:51:08.636044 kubelet[2627]: E0709 23:51:08.635950 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:08.960609 containerd[1496]: time="2025-07-09T23:51:08.960531092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" id:\"8f32e71fbf6301b6b6a823fee395bbf23f52bb666c99e84b109409c258bb1e9c\" pid:5064 exit_status:1 exited_at:{seconds:1752105068 nanos:960014458}"
Jul 9 23:51:09.417116 systemd-networkd[1422]: lxc_health: Link UP
Jul 9 23:51:09.417340 systemd-networkd[1422]: lxc_health: Gained carrier
Jul 9 23:51:10.637207 kubelet[2627]: E0709 23:51:10.637124 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:10.987525 kubelet[2627]: E0709 23:51:10.987472 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:10.992639 systemd-networkd[1422]: lxc_health: Gained IPv6LL
Jul 9 23:51:11.102785 containerd[1496]: time="2025-07-09T23:51:11.102646531Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" id:\"d22a28635719e20f72dba4d964ac6615f8e464fbb13a67963144f91cc77d84e9\" pid:5202 exited_at:{seconds:1752105071 nanos:101749381}"
Jul 9 23:51:11.718996 kubelet[2627]: E0709 23:51:11.718960 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:11.989417 kubelet[2627]: E0709 23:51:11.989301 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 23:51:13.345907 containerd[1496]: time="2025-07-09T23:51:13.345613140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" id:\"60c1dc6f72de3c6b9b0afce7020ed53e516fc7966dd6d576811f4184d7e0a9fe\" pid:5234 exited_at:{seconds:1752105073 nanos:343761440}"
Jul 9 23:51:15.490823 containerd[1496]: time="2025-07-09T23:51:15.490771413Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bba783f2647acc377b3dee241a0347588df70e202a0f3f8085756b9e0ff11a1\" id:\"0e74897b3bf37d8ffa8e4b512b02611b5f11a2e02f9a9a5f99751b14a235d3b3\" pid:5260 exited_at:{seconds:1752105075 nanos:490497136}"
Jul 9 23:51:15.496071 sshd[4400]: Connection closed by 10.0.0.1 port 51260
Jul 9 23:51:15.497764 sshd-session[4394]: pam_unix(sshd:session): session closed for user core
Jul 9 23:51:15.501614 systemd[1]: sshd@24-10.0.0.74:22-10.0.0.1:51260.service: Deactivated successfully.
Jul 9 23:51:15.503886 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 23:51:15.504822 systemd-logind[1479]: Session 25 logged out. Waiting for processes to exit.
Jul 9 23:51:15.506605 systemd-logind[1479]: Removed session 25.
Jul 9 23:51:16.718461 kubelet[2627]: E0709 23:51:16.718043 2627 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"