Jul 10 23:39:14.824404 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 23:39:14.824423 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu Jul 10 22:17:59 -00 2025
Jul 10 23:39:14.824433 kernel: KASLR enabled
Jul 10 23:39:14.824438 kernel: efi: EFI v2.7 by EDK II
Jul 10 23:39:14.824444 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 10 23:39:14.824449 kernel: random: crng init done
Jul 10 23:39:14.824456 kernel: secureboot: Secure boot disabled
Jul 10 23:39:14.824461 kernel: ACPI: Early table checksum verification disabled
Jul 10 23:39:14.824467 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 10 23:39:14.824474 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 23:39:14.824480 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824485 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824491 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824497 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824504 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824511 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824518 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824524 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824530 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 23:39:14.824535 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 23:39:14.824541 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 10 23:39:14.824547 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:39:14.824553 kernel: NODE_DATA(0) allocated [mem 0xdc964dc0-0xdc96bfff]
Jul 10 23:39:14.824559 kernel: Zone ranges:
Jul 10 23:39:14.824565 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:39:14.824572 kernel: DMA32 empty
Jul 10 23:39:14.824578 kernel: Normal empty
Jul 10 23:39:14.824584 kernel: Device empty
Jul 10 23:39:14.824590 kernel: Movable zone start for each node
Jul 10 23:39:14.824596 kernel: Early memory node ranges
Jul 10 23:39:14.824602 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 10 23:39:14.824607 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 10 23:39:14.824614 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 10 23:39:14.824619 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 10 23:39:14.824625 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 10 23:39:14.824631 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 10 23:39:14.824637 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 10 23:39:14.824645 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 10 23:39:14.824651 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 10 23:39:14.824657 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 23:39:14.824666 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 23:39:14.824672 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 23:39:14.824679 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 23:39:14.824687 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 23:39:14.824693 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 23:39:14.824700 kernel: psci: probing for conduit method from ACPI.
Jul 10 23:39:14.824706 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 23:39:14.824712 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 23:39:14.824718 kernel: psci: Trusted OS migration not required
Jul 10 23:39:14.824725 kernel: psci: SMC Calling Convention v1.1
Jul 10 23:39:14.824731 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 23:39:14.824737 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 10 23:39:14.824744 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 10 23:39:14.824752 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 23:39:14.824758 kernel: Detected PIPT I-cache on CPU0
Jul 10 23:39:14.824764 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 23:39:14.824771 kernel: CPU features: detected: Spectre-v4
Jul 10 23:39:14.824778 kernel: CPU features: detected: Spectre-BHB
Jul 10 23:39:14.824784 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 23:39:14.824791 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 23:39:14.824806 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 23:39:14.824832 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 23:39:14.824838 kernel: alternatives: applying boot alternatives
Jul 10 23:39:14.824845 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538
Jul 10 23:39:14.824855 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 23:39:14.824903 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 23:39:14.824912 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 23:39:14.824919 kernel: Fallback order for Node 0: 0
Jul 10 23:39:14.824928 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 10 23:39:14.824938 kernel: Policy zone: DMA
Jul 10 23:39:14.824948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 23:39:14.824955 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 10 23:39:14.824961 kernel: software IO TLB: area num 4.
Jul 10 23:39:14.824967 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 10 23:39:14.824974 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
Jul 10 23:39:14.824980 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 23:39:14.824989 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 23:39:14.824996 kernel: rcu: RCU event tracing is enabled.
Jul 10 23:39:14.825003 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 23:39:14.825010 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 23:39:14.825016 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 23:39:14.825023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 23:39:14.825030 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 23:39:14.825036 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 23:39:14.825043 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 23:39:14.825050 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 23:39:14.825056 kernel: GICv3: 256 SPIs implemented
Jul 10 23:39:14.825064 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 23:39:14.825071 kernel: Root IRQ handler: gic_handle_irq
Jul 10 23:39:14.825077 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 23:39:14.825084 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 10 23:39:14.825091 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 23:39:14.825097 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 23:39:14.825104 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 23:39:14.825111 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 10 23:39:14.825117 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 10 23:39:14.825124 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 10 23:39:14.825132 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 23:39:14.825144 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:39:14.825155 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 23:39:14.825162 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 23:39:14.825169 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 23:39:14.825175 kernel: arm-pv: using stolen time PV
Jul 10 23:39:14.825183 kernel: Console: colour dummy device 80x25
Jul 10 23:39:14.825189 kernel: ACPI: Core revision 20240827
Jul 10 23:39:14.825196 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 23:39:14.825203 kernel: pid_max: default: 32768 minimum: 301
Jul 10 23:39:14.825210 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 10 23:39:14.825217 kernel: landlock: Up and running.
Jul 10 23:39:14.825227 kernel: SELinux: Initializing.
Jul 10 23:39:14.825234 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:39:14.825241 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 23:39:14.825247 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 23:39:14.825254 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 23:39:14.825261 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 10 23:39:14.825268 kernel: Remapping and enabling EFI services.
Jul 10 23:39:14.825274 kernel: smp: Bringing up secondary CPUs ...
Jul 10 23:39:14.825281 kernel: Detected PIPT I-cache on CPU1
Jul 10 23:39:14.825297 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 23:39:14.825304 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 10 23:39:14.825313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:39:14.825320 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 23:39:14.825327 kernel: Detected PIPT I-cache on CPU2
Jul 10 23:39:14.825334 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 23:39:14.825341 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 10 23:39:14.825350 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:39:14.825357 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 23:39:14.825363 kernel: Detected PIPT I-cache on CPU3
Jul 10 23:39:14.825370 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 23:39:14.825377 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 10 23:39:14.825385 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 23:39:14.825391 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 23:39:14.825398 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 23:39:14.825405 kernel: SMP: Total of 4 processors activated.
Jul 10 23:39:14.825412 kernel: CPU: All CPU(s) started at EL1
Jul 10 23:39:14.825420 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 23:39:14.825427 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 23:39:14.825434 kernel: CPU features: detected: Common not Private translations
Jul 10 23:39:14.825441 kernel: CPU features: detected: CRC32 instructions
Jul 10 23:39:14.825448 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 23:39:14.825455 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 23:39:14.825462 kernel: CPU features: detected: LSE atomic instructions
Jul 10 23:39:14.825469 kernel: CPU features: detected: Privileged Access Never
Jul 10 23:39:14.825476 kernel: CPU features: detected: RAS Extension Support
Jul 10 23:39:14.825485 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 23:39:14.825492 kernel: alternatives: applying system-wide alternatives
Jul 10 23:39:14.825499 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 10 23:39:14.825506 kernel: Memory: 2440416K/2572288K available (11136K kernel code, 2428K rwdata, 9032K rodata, 39488K init, 1035K bss, 125924K reserved, 0K cma-reserved)
Jul 10 23:39:14.825513 kernel: devtmpfs: initialized
Jul 10 23:39:14.825520 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 23:39:14.825527 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 23:39:14.825534 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 23:39:14.825541 kernel: 0 pages in range for non-PLT usage
Jul 10 23:39:14.825550 kernel: 508448 pages in range for PLT usage
Jul 10 23:39:14.825557 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 23:39:14.825566 kernel: SMBIOS 3.0.0 present.
Jul 10 23:39:14.825573 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 10 23:39:14.825580 kernel: DMI: Memory slots populated: 1/1
Jul 10 23:39:14.825587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 23:39:14.825594 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 23:39:14.825601 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 23:39:14.825608 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 23:39:14.825617 kernel: audit: initializing netlink subsys (disabled)
Jul 10 23:39:14.825624 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jul 10 23:39:14.825636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 23:39:14.825643 kernel: cpuidle: using governor menu
Jul 10 23:39:14.825650 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 23:39:14.825657 kernel: ASID allocator initialised with 32768 entries
Jul 10 23:39:14.825666 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 23:39:14.825673 kernel: Serial: AMBA PL011 UART driver
Jul 10 23:39:14.825680 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 23:39:14.825688 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 23:39:14.825695 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 23:39:14.825702 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 23:39:14.825709 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 23:39:14.825716 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 23:39:14.825723 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 23:39:14.825730 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 23:39:14.825737 kernel: ACPI: Added _OSI(Module Device)
Jul 10 23:39:14.825744 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 23:39:14.825752 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 23:39:14.825759 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 23:39:14.825766 kernel: ACPI: Interpreter enabled
Jul 10 23:39:14.825773 kernel: ACPI: Using GIC for interrupt routing
Jul 10 23:39:14.825780 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 23:39:14.825787 kernel: ACPI: CPU0 has been hot-added
Jul 10 23:39:14.825799 kernel: ACPI: CPU1 has been hot-added
Jul 10 23:39:14.825806 kernel: ACPI: CPU2 has been hot-added
Jul 10 23:39:14.825821 kernel: ACPI: CPU3 has been hot-added
Jul 10 23:39:14.825829 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 23:39:14.825837 kernel: printk: legacy console [ttyAMA0] enabled
Jul 10 23:39:14.825844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 23:39:14.826008 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 23:39:14.826082 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 23:39:14.826145 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 23:39:14.826213 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 23:39:14.826276 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 23:39:14.826288 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 23:39:14.826295 kernel: PCI host bridge to bus 0000:00
Jul 10 23:39:14.826363 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 23:39:14.826422 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 23:39:14.826477 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 23:39:14.826531 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 23:39:14.826607 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 10 23:39:14.826693 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 10 23:39:14.826758 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 10 23:39:14.826918 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 10 23:39:14.826987 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 23:39:14.827048 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 10 23:39:14.827110 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 10 23:39:14.827175 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 10 23:39:14.827238 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 23:39:14.827304 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 23:39:14.827368 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 23:39:14.827380 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 23:39:14.827388 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 23:39:14.827395 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 23:39:14.827402 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 23:39:14.827411 kernel: iommu: Default domain type: Translated
Jul 10 23:39:14.827419 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 23:39:14.827428 kernel: efivars: Registered efivars operations
Jul 10 23:39:14.827435 kernel: vgaarb: loaded
Jul 10 23:39:14.827444 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 23:39:14.827451 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 23:39:14.827459 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 23:39:14.827466 kernel: pnp: PnP ACPI init
Jul 10 23:39:14.828483 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 23:39:14.828499 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 23:39:14.828506 kernel: NET: Registered PF_INET protocol family
Jul 10 23:39:14.828513 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 23:39:14.828526 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 23:39:14.828534 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 23:39:14.828541 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 23:39:14.828548 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 23:39:14.828555 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 23:39:14.828569 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:39:14.828576 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 23:39:14.828585 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 23:39:14.828594 kernel: PCI: CLS 0 bytes, default 64
Jul 10 23:39:14.828602 kernel: kvm [1]: HYP mode not available
Jul 10 23:39:14.828611 kernel: Initialise system trusted keyrings
Jul 10 23:39:14.828618 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 23:39:14.828627 kernel: Key type asymmetric registered
Jul 10 23:39:14.828638 kernel: Asymmetric key parser 'x509' registered
Jul 10 23:39:14.828646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 10 23:39:14.828653 kernel: io scheduler mq-deadline registered
Jul 10 23:39:14.828660 kernel: io scheduler kyber registered
Jul 10 23:39:14.828667 kernel: io scheduler bfq registered
Jul 10 23:39:14.828674 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 23:39:14.828683 kernel: ACPI: button: Power Button [PWRB]
Jul 10 23:39:14.828690 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 23:39:14.828767 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 23:39:14.828778 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 23:39:14.828787 kernel: thunder_xcv, ver 1.0
Jul 10 23:39:14.828799 kernel: thunder_bgx, ver 1.0
Jul 10 23:39:14.828807 kernel: nicpf, ver 1.0
Jul 10 23:39:14.828823 kernel: nicvf, ver 1.0
Jul 10 23:39:14.828902 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 23:39:14.828967 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T23:39:14 UTC (1752190754)
Jul 10 23:39:14.828977 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 23:39:14.828984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 10 23:39:14.828994 kernel: watchdog: NMI not fully supported
Jul 10 23:39:14.829001 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 23:39:14.829007 kernel: NET: Registered PF_INET6 protocol family
Jul 10 23:39:14.829015 kernel: Segment Routing with IPv6
Jul 10 23:39:14.829021 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 23:39:14.829028 kernel: NET: Registered PF_PACKET protocol family
Jul 10 23:39:14.829035 kernel: Key type dns_resolver registered
Jul 10 23:39:14.829042 kernel: registered taskstats version 1
Jul 10 23:39:14.829049 kernel: Loading compiled-in X.509 certificates
Jul 10 23:39:14.829055 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 0718d62a7a0702c0da490764fdc6ec06d7382bc1'
Jul 10 23:39:14.829064 kernel: Demotion targets for Node 0: null
Jul 10 23:39:14.829070 kernel: Key type .fscrypt registered
Jul 10 23:39:14.829081 kernel: Key type fscrypt-provisioning registered
Jul 10 23:39:14.829088 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 23:39:14.829094 kernel: ima: Allocated hash algorithm: sha1
Jul 10 23:39:14.829101 kernel: ima: No architecture policies found
Jul 10 23:39:14.829108 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 23:39:14.829115 kernel: clk: Disabling unused clocks
Jul 10 23:39:14.829123 kernel: PM: genpd: Disabling unused power domains
Jul 10 23:39:14.829130 kernel: Warning: unable to open an initial console.
Jul 10 23:39:14.829137 kernel: Freeing unused kernel memory: 39488K
Jul 10 23:39:14.829143 kernel: Run /init as init process
Jul 10 23:39:14.829150 kernel: with arguments:
Jul 10 23:39:14.829157 kernel: /init
Jul 10 23:39:14.829163 kernel: with environment:
Jul 10 23:39:14.829170 kernel: HOME=/
Jul 10 23:39:14.829177 kernel: TERM=linux
Jul 10 23:39:14.829185 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 23:39:14.829193 systemd[1]: Successfully made /usr/ read-only.
Jul 10 23:39:14.829202 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:39:14.829210 systemd[1]: Detected virtualization kvm.
Jul 10 23:39:14.829217 systemd[1]: Detected architecture arm64.
Jul 10 23:39:14.829226 systemd[1]: Running in initrd.
Jul 10 23:39:14.829234 systemd[1]: No hostname configured, using default hostname.
Jul 10 23:39:14.829243 systemd[1]: Hostname set to .
Jul 10 23:39:14.829250 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 23:39:14.829257 systemd[1]: Queued start job for default target initrd.target.
Jul 10 23:39:14.829265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:39:14.829272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:39:14.829280 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 23:39:14.829292 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:39:14.829300 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 23:39:14.829309 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 23:39:14.829318 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 23:39:14.829326 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 23:39:14.829333 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:39:14.829341 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:39:14.829348 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:39:14.829355 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:39:14.829364 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:39:14.829372 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:39:14.829379 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 23:39:14.829386 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 23:39:14.829394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 23:39:14.829401 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 10 23:39:14.829409 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:39:14.829416 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:39:14.829424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:39:14.829433 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:39:14.829440 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 23:39:14.829448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:39:14.829456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 23:39:14.829464 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 10 23:39:14.829472 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 23:39:14.829480 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:39:14.829487 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:39:14.829496 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:39:14.829504 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 23:39:14.829512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:39:14.829519 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 23:39:14.829527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 23:39:14.829553 systemd-journald[244]: Collecting audit messages is disabled.
Jul 10 23:39:14.829572 systemd-journald[244]: Journal started
Jul 10 23:39:14.829591 systemd-journald[244]: Runtime Journal (/run/log/journal/340f060b761643bbbb3d07cc36f220d1) is 6M, max 48.5M, 42.4M free.
Jul 10 23:39:14.822004 systemd-modules-load[245]: Inserted module 'overlay'
Jul 10 23:39:14.831855 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:39:14.833746 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:39:14.835904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 23:39:14.837287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:39:14.850827 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 23:39:14.852840 kernel: Bridge firewalling registered
Jul 10 23:39:14.852504 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 23:39:14.852517 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 10 23:39:14.854072 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:39:14.855475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:39:14.857729 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:39:14.857982 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 10 23:39:14.861068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:39:14.867830 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:39:14.868995 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:39:14.873172 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:39:14.878595 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:39:14.893449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 23:39:14.908826 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9ae0b1f40710648305be8f7e436b6937e65ac0b33eb84d1b5b7411684b4e7538
Jul 10 23:39:14.924380 systemd-resolved[288]: Positive Trust Anchors:
Jul 10 23:39:14.924395 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:39:14.924426 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:39:14.930034 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 10 23:39:14.931013 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:39:14.933650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:39:14.987845 kernel: SCSI subsystem initialized
Jul 10 23:39:14.993855 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 23:39:15.003877 kernel: iscsi: registered transport (tcp)
Jul 10 23:39:15.016840 kernel: iscsi: registered transport (qla4xxx)
Jul 10 23:39:15.016903 kernel: QLogic iSCSI HBA Driver
Jul 10 23:39:15.033214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 23:39:15.051580 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:39:15.052908 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 23:39:15.103366 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:39:15.105438 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 23:39:15.170868 kernel: raid6: neonx8 gen() 14312 MB/s Jul 10 23:39:15.187849 kernel: raid6: neonx4 gen() 14891 MB/s Jul 10 23:39:15.204836 kernel: raid6: neonx2 gen() 12958 MB/s Jul 10 23:39:15.221845 kernel: raid6: neonx1 gen() 10278 MB/s Jul 10 23:39:15.238847 kernel: raid6: int64x8 gen() 6719 MB/s Jul 10 23:39:15.255846 kernel: raid6: int64x4 gen() 7242 MB/s Jul 10 23:39:15.272829 kernel: raid6: int64x2 gen() 6023 MB/s Jul 10 23:39:15.289835 kernel: raid6: int64x1 gen() 4965 MB/s Jul 10 23:39:15.289863 kernel: raid6: using algorithm neonx4 gen() 14891 MB/s Jul 10 23:39:15.306834 kernel: raid6: .... xor() 12108 MB/s, rmw enabled Jul 10 23:39:15.306854 kernel: raid6: using neon recovery algorithm Jul 10 23:39:15.313834 kernel: xor: measuring software checksum speed Jul 10 23:39:15.314939 kernel: 8regs : 21636 MB/sec Jul 10 23:39:15.314952 kernel: 32regs : 21693 MB/sec Jul 10 23:39:15.315928 kernel: arm64_neon : 28061 MB/sec Jul 10 23:39:15.315940 kernel: xor: using function: arm64_neon (28061 MB/sec) Jul 10 23:39:15.372907 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 10 23:39:15.379665 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 10 23:39:15.382094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 23:39:15.408843 systemd-udevd[500]: Using default interface naming scheme 'v255'. Jul 10 23:39:15.412925 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 23:39:15.414497 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 10 23:39:15.446483 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Jul 10 23:39:15.469761 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:39:15.472235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 23:39:15.528397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 10 23:39:15.532949 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 10 23:39:15.576891 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 10 23:39:15.577168 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 23:39:15.580120 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 23:39:15.580152 kernel: GPT:9289727 != 19775487 Jul 10 23:39:15.580162 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 23:39:15.580185 kernel: GPT:9289727 != 19775487 Jul 10 23:39:15.581099 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 23:39:15.581120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:39:15.583446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 23:39:15.583565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:39:15.586079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:39:15.588026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 23:39:15.612401 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 10 23:39:15.622468 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 23:39:15.625184 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 10 23:39:15.633331 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 10 23:39:15.641965 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 10 23:39:15.644365 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 10 23:39:15.652025 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jul 10 23:39:15.653068 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:39:15.654822 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:39:15.656774 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 23:39:15.659399 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 10 23:39:15.660946 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 10 23:39:15.682525 disk-uuid[591]: Primary Header is updated. Jul 10 23:39:15.682525 disk-uuid[591]: Secondary Entries is updated. Jul 10 23:39:15.682525 disk-uuid[591]: Secondary Header is updated. Jul 10 23:39:15.688841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:39:15.690084 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:39:16.704461 disk-uuid[596]: The operation has completed successfully. Jul 10 23:39:16.705587 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 23:39:16.728488 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 23:39:16.728615 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 10 23:39:16.759477 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 10 23:39:16.790842 sh[611]: Success Jul 10 23:39:16.805828 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 23:39:16.805881 kernel: device-mapper: uevent: version 1.0.3 Jul 10 23:39:16.805892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 10 23:39:16.822481 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 10 23:39:16.853943 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 10 23:39:16.858625 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 10 23:39:16.874549 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 10 23:39:16.885199 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 10 23:39:16.885244 kernel: BTRFS: device fsid 1d7bf05b-5ff9-431d-b4bb-8cc553220034 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623) Jul 10 23:39:16.889313 kernel: BTRFS info (device dm-0): first mount of filesystem 1d7bf05b-5ff9-431d-b4bb-8cc553220034 Jul 10 23:39:16.889350 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:39:16.889361 kernel: BTRFS info (device dm-0): using free-space-tree Jul 10 23:39:16.895863 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 10 23:39:16.896912 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 10 23:39:16.897855 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 10 23:39:16.898661 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 10 23:39:16.901115 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 10 23:39:16.926353 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655) Jul 10 23:39:16.926405 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:39:16.927337 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:39:16.927366 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:39:16.934866 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:39:16.935479 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 10 23:39:16.937882 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jul 10 23:39:17.016644 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 23:39:17.019427 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 23:39:17.062106 systemd-networkd[798]: lo: Link UP Jul 10 23:39:17.062116 systemd-networkd[798]: lo: Gained carrier Jul 10 23:39:17.062876 systemd-networkd[798]: Enumeration completed Jul 10 23:39:17.062961 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 23:39:17.063364 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 23:39:17.063367 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 23:39:17.064072 systemd[1]: Reached target network.target - Network. Jul 10 23:39:17.064138 systemd-networkd[798]: eth0: Link UP Jul 10 23:39:17.064141 systemd-networkd[798]: eth0: Gained carrier Jul 10 23:39:17.064150 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 10 23:39:17.089086 ignition[697]: Ignition 2.21.0 Jul 10 23:39:17.089099 ignition[697]: Stage: fetch-offline Jul 10 23:39:17.089133 ignition[697]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:17.089142 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:17.090867 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 23:39:17.089317 ignition[697]: parsed url from cmdline: "" Jul 10 23:39:17.089320 ignition[697]: no config URL provided Jul 10 23:39:17.089325 ignition[697]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 23:39:17.089332 ignition[697]: no config at "/usr/lib/ignition/user.ign" Jul 10 23:39:17.089352 ignition[697]: op(1): [started] loading QEMU firmware config module Jul 10 23:39:17.089357 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 23:39:17.097247 ignition[697]: op(1): [finished] loading QEMU firmware config module Jul 10 23:39:17.134204 ignition[697]: parsing config with SHA512: 34909d2f08aaaa87f802b4b1ec38dcf52d38ac8a792ab03b238f29ab7f8d114f9dc3666adcbf5b7c7d065eee4415ebea792d4ec7f0caebffa79ce1700312f50d Jul 10 23:39:17.138379 unknown[697]: fetched base config from "system" Jul 10 23:39:17.138395 unknown[697]: fetched user config from "qemu" Jul 10 23:39:17.138771 ignition[697]: fetch-offline: fetch-offline passed Jul 10 23:39:17.140635 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 23:39:17.138871 ignition[697]: Ignition finished successfully Jul 10 23:39:17.142182 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 23:39:17.143070 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 10 23:39:17.168988 ignition[812]: Ignition 2.21.0 Jul 10 23:39:17.169002 ignition[812]: Stage: kargs Jul 10 23:39:17.169128 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:17.169137 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:17.169874 ignition[812]: kargs: kargs passed Jul 10 23:39:17.172879 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 10 23:39:17.169919 ignition[812]: Ignition finished successfully Jul 10 23:39:17.174614 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 10 23:39:17.197992 ignition[820]: Ignition 2.21.0 Jul 10 23:39:17.198009 ignition[820]: Stage: disks Jul 10 23:39:17.198176 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:17.198186 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:17.200381 ignition[820]: disks: disks passed Jul 10 23:39:17.200431 ignition[820]: Ignition finished successfully Jul 10 23:39:17.202909 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 10 23:39:17.203821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 10 23:39:17.205203 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 23:39:17.206891 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 23:39:17.208489 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 23:39:17.209948 systemd[1]: Reached target basic.target - Basic System. Jul 10 23:39:17.212212 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 10 23:39:17.242756 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 10 23:39:17.247507 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 10 23:39:17.249322 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 10 23:39:17.314798 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 10 23:39:17.315949 kernel: EXT4-fs (vda9): mounted filesystem 5e67f91a-7210-47f1-85b9-a7aa031a1904 r/w with ordered data mode. Quota mode: none. Jul 10 23:39:17.315856 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 10 23:39:17.318378 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 10 23:39:17.319898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 10 23:39:17.320685 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 10 23:39:17.320740 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 23:39:17.320767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:39:17.333292 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 10 23:39:17.335548 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 10 23:39:17.338084 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (838) Jul 10 23:39:17.339841 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:39:17.339879 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:39:17.339899 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:39:17.342239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 10 23:39:17.386863 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 23:39:17.391196 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Jul 10 23:39:17.394425 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 23:39:17.397298 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 23:39:17.477541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 10 23:39:17.479611 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 10 23:39:17.481246 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 10 23:39:17.499861 kernel: BTRFS info (device vda6): last unmount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:39:17.515955 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 10 23:39:17.527904 ignition[952]: INFO : Ignition 2.21.0 Jul 10 23:39:17.527904 ignition[952]: INFO : Stage: mount Jul 10 23:39:17.529483 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:17.529483 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:17.531975 ignition[952]: INFO : mount: mount passed Jul 10 23:39:17.531975 ignition[952]: INFO : Ignition finished successfully Jul 10 23:39:17.532516 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 10 23:39:17.534278 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 10 23:39:17.885386 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 10 23:39:17.886911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 10 23:39:17.910830 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (965) Jul 10 23:39:17.912418 kernel: BTRFS info (device vda6): first mount of filesystem b11340e8-a7f1-4911-a987-813f898c22db Jul 10 23:39:17.912434 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 23:39:17.912444 kernel: BTRFS info (device vda6): using free-space-tree Jul 10 23:39:17.915521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 10 23:39:17.943753 ignition[982]: INFO : Ignition 2.21.0 Jul 10 23:39:17.943753 ignition[982]: INFO : Stage: files Jul 10 23:39:17.945536 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:17.945536 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:17.945536 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Jul 10 23:39:17.948354 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 23:39:17.948354 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 23:39:17.950407 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 23:39:17.950407 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 23:39:17.950407 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 23:39:17.950407 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 10 23:39:17.950407 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 10 23:39:17.949118 unknown[982]: wrote ssh authorized keys file for user: core
Jul 10 23:39:18.639147 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 10 23:39:18.989995 systemd-networkd[798]: eth0: Gained IPv6LL Jul 10 23:39:19.469412 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 10 23:39:19.469412 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 23:39:19.472390 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 10 23:39:19.598404 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 23:39:19.691446 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 23:39:19.693152 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 23:39:19.703891 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 10 23:39:19.964852 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 10 23:39:20.426457 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 23:39:20.426457 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 10 23:39:20.429405 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 23:39:20.431904 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 23:39:20.431904 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 10 23:39:20.434183 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 10 23:39:20.434183 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 23:39:20.434183 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 23:39:20.434183 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 10 23:39:20.434183 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 23:39:20.475333 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 23:39:20.480523 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 23:39:20.481705 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 23:39:20.481705 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 10 23:39:20.481705 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 23:39:20.481705 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:39:20.481705 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 23:39:20.481705 ignition[982]: INFO : files: files passed Jul 10 23:39:20.481705 ignition[982]: INFO : Ignition finished successfully Jul 10 23:39:20.483208 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 23:39:20.485747 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 10 23:39:20.487333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 10 23:39:20.504765 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 23:39:20.504888 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 10 23:39:20.507048 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jul 10 23:39:20.509080 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:39:20.510439 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:39:20.511592 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 10 23:39:20.512504 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:39:20.513756 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 10 23:39:20.515876 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 10 23:39:20.551907 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 23:39:20.552052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 10 23:39:20.553763 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 10 23:39:20.555082 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 10 23:39:20.556432 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 10 23:39:20.557276 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 10 23:39:20.583850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 10 23:39:20.586002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 10 23:39:20.607396 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 10 23:39:20.608618 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 23:39:20.610249 systemd[1]: Stopped target timers.target - Timer Units. Jul 10 23:39:20.611565 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 23:39:20.611764 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 10 23:39:20.613514 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 10 23:39:20.614936 systemd[1]: Stopped target basic.target - Basic System. Jul 10 23:39:20.616101 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 10 23:39:20.617324 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 23:39:20.618726 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 23:39:20.620314 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 10 23:39:20.621637 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 23:39:20.623081 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 23:39:20.625728 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 23:39:20.626687 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 23:39:20.628155 systemd[1]: Stopped target swap.target - Swaps. Jul 10 23:39:20.629301 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 23:39:20.629435 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 23:39:20.631588 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 23:39:20.632981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 10 23:39:20.634416 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 23:39:20.635833 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 23:39:20.636737 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 23:39:20.636883 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 23:39:20.638953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 23:39:20.639069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 23:39:20.640470 systemd[1]: Stopped target paths.target - Path Units. Jul 10 23:39:20.643753 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 23:39:20.648877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 23:39:20.649855 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 23:39:20.651376 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 23:39:20.652539 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 23:39:20.652628 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 23:39:20.656982 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 23:39:20.657059 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 23:39:20.658165 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 23:39:20.658283 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 23:39:20.659534 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 23:39:20.659639 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 23:39:20.661521 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 23:39:20.662535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 10 23:39:20.662665 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 23:39:20.687211 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 23:39:20.687898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 23:39:20.688036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 23:39:20.689369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 23:39:20.689458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 23:39:20.696124 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 23:39:20.696206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 23:39:20.701327 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 23:39:20.703659 ignition[1037]: INFO : Ignition 2.21.0 Jul 10 23:39:20.703659 ignition[1037]: INFO : Stage: umount Jul 10 23:39:20.704909 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 23:39:20.704909 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 23:39:20.704909 ignition[1037]: INFO : umount: umount passed Jul 10 23:39:20.705537 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 23:39:20.710742 ignition[1037]: INFO : Ignition finished successfully Jul 10 23:39:20.705629 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 23:39:20.707446 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 23:39:20.707516 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 23:39:20.709254 systemd[1]: Stopped target network.target - Network. Jul 10 23:39:20.709924 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 23:39:20.709984 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 23:39:20.711523 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jul 10 23:39:20.711569 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 23:39:20.713197 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 23:39:20.713245 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 23:39:20.714419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 23:39:20.714458 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 23:39:20.715691 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 23:39:20.715737 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 23:39:20.717264 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 23:39:20.718658 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 23:39:20.728299 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 23:39:20.728432 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 23:39:20.732875 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 10 23:39:20.733121 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 23:39:20.733164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 23:39:20.736195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 10 23:39:20.737058 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 23:39:20.737185 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 23:39:20.740203 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 10 23:39:20.740342 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 10 23:39:20.741785 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Jul 10 23:39:20.741834 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:39:20.744469 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 10 23:39:20.745262 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 10 23:39:20.745312 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 23:39:20.746734 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 10 23:39:20.746771 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:39:20.749983 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 10 23:39:20.750028 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:39:20.751196 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:39:20.755686 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 10 23:39:20.764340 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 10 23:39:20.764480 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:39:20.766086 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 10 23:39:20.766131 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:39:20.767438 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 10 23:39:20.767464 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:39:20.768732 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 10 23:39:20.768780 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 23:39:20.770800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 10 23:39:20.770853 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 10 23:39:20.772879 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 23:39:20.772925 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 23:39:20.775709 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 10 23:39:20.777037 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 10 23:39:20.777098 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:39:20.779859 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 10 23:39:20.779910 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:39:20.782481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 23:39:20.782530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:39:20.785421 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 10 23:39:20.787931 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 10 23:39:20.793270 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 10 23:39:20.793365 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 10 23:39:20.794555 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 10 23:39:20.796158 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 10 23:39:20.805642 systemd[1]: Switching root.
Jul 10 23:39:20.846950 systemd-journald[244]: Journal stopped
Jul 10 23:39:21.686436 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 10 23:39:21.686483 kernel: SELinux: policy capability network_peer_controls=1
Jul 10 23:39:21.686494 kernel: SELinux: policy capability open_perms=1
Jul 10 23:39:21.686504 kernel: SELinux: policy capability extended_socket_class=1
Jul 10 23:39:21.686513 kernel: SELinux: policy capability always_check_network=0
Jul 10 23:39:21.686522 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 10 23:39:21.686537 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 10 23:39:21.686546 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 10 23:39:21.686561 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 10 23:39:21.686571 kernel: SELinux: policy capability userspace_initial_context=0
Jul 10 23:39:21.686585 kernel: audit: type=1403 audit(1752190761.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 10 23:39:21.686599 systemd[1]: Successfully loaded SELinux policy in 33.400ms.
Jul 10 23:39:21.686615 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.700ms.
Jul 10 23:39:21.686626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 10 23:39:21.686637 systemd[1]: Detected virtualization kvm.
Jul 10 23:39:21.686647 systemd[1]: Detected architecture arm64.
Jul 10 23:39:21.686657 systemd[1]: Detected first boot.
Jul 10 23:39:21.686669 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 23:39:21.686679 zram_generator::config[1082]: No configuration found.
Jul 10 23:39:21.686690 kernel: NET: Registered PF_VSOCK protocol family
Jul 10 23:39:21.686700 systemd[1]: Populated /etc with preset unit settings.
Jul 10 23:39:21.686710 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 10 23:39:21.686721 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 10 23:39:21.686731 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 10 23:39:21.686741 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 10 23:39:21.686754 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 10 23:39:21.686764 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 10 23:39:21.686784 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 10 23:39:21.686795 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 10 23:39:21.686806 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 10 23:39:21.686825 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 10 23:39:21.686837 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 10 23:39:21.686847 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 10 23:39:21.686857 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 23:39:21.686870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 23:39:21.686880 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 10 23:39:21.686890 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 10 23:39:21.686900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 10 23:39:21.686910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 23:39:21.686925 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 10 23:39:21.686935 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 23:39:21.686946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 23:39:21.686957 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 10 23:39:21.686967 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 10 23:39:21.686977 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 10 23:39:21.686987 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 10 23:39:21.686997 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 23:39:21.687007 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 23:39:21.687018 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 23:39:21.687028 systemd[1]: Reached target swap.target - Swaps.
Jul 10 23:39:21.687038 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 10 23:39:21.687050 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 10 23:39:21.687060 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 10 23:39:21.687070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 23:39:21.687080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 23:39:21.687090 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 23:39:21.687100 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 10 23:39:21.687110 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 10 23:39:21.687121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 10 23:39:21.687131 systemd[1]: Mounting media.mount - External Media Directory...
Jul 10 23:39:21.687142 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 10 23:39:21.687152 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 10 23:39:21.687162 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 10 23:39:21.687173 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 10 23:39:21.687183 systemd[1]: Reached target machines.target - Containers.
Jul 10 23:39:21.687193 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 10 23:39:21.687203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:39:21.687214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 23:39:21.687226 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 10 23:39:21.687236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:39:21.687246 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 23:39:21.687256 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:39:21.687267 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 10 23:39:21.687276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:39:21.687287 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 10 23:39:21.687297 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 10 23:39:21.687308 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 10 23:39:21.687323 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 10 23:39:21.687332 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 10 23:39:21.687343 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:39:21.687353 kernel: fuse: init (API version 7.41)
Jul 10 23:39:21.687362 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 23:39:21.687373 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 23:39:21.687383 kernel: loop: module loaded
Jul 10 23:39:21.687393 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 10 23:39:21.687404 kernel: ACPI: bus type drm_connector registered
Jul 10 23:39:21.687413 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 10 23:39:21.687424 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 10 23:39:21.687434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 23:39:21.687444 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 10 23:39:21.687454 systemd[1]: Stopped verity-setup.service.
Jul 10 23:39:21.687465 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 10 23:39:21.687475 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 10 23:39:21.687485 systemd[1]: Mounted media.mount - External Media Directory.
Jul 10 23:39:21.687495 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 10 23:39:21.687505 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 10 23:39:21.687517 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 10 23:39:21.687527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 10 23:39:21.687555 systemd-journald[1154]: Collecting audit messages is disabled.
Jul 10 23:39:21.687577 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 23:39:21.687590 systemd-journald[1154]: Journal started
Jul 10 23:39:21.687610 systemd-journald[1154]: Runtime Journal (/run/log/journal/340f060b761643bbbb3d07cc36f220d1) is 6M, max 48.5M, 42.4M free.
Jul 10 23:39:21.476942 systemd[1]: Queued start job for default target multi-user.target.
Jul 10 23:39:21.499984 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 10 23:39:21.500390 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 10 23:39:21.690389 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 23:39:21.691165 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 10 23:39:21.691345 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 10 23:39:21.692786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:39:21.693851 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:39:21.694972 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 23:39:21.695134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 23:39:21.696393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:39:21.696569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:39:21.698046 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 10 23:39:21.698214 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 10 23:39:21.699910 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:39:21.700075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:39:21.701181 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 23:39:21.702289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 10 23:39:21.703657 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 10 23:39:21.704966 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 10 23:39:21.718146 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 10 23:39:21.720558 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 10 23:39:21.722689 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 10 23:39:21.723927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 10 23:39:21.723957 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 23:39:21.725971 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 10 23:39:21.734735 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 10 23:39:21.735693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:39:21.738849 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 10 23:39:21.740702 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 10 23:39:21.741667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 23:39:21.744148 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 10 23:39:21.745301 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 23:39:21.751037 systemd-journald[1154]: Time spent on flushing to /var/log/journal/340f060b761643bbbb3d07cc36f220d1 is 14.651ms for 882 entries.
Jul 10 23:39:21.751037 systemd-journald[1154]: System Journal (/var/log/journal/340f060b761643bbbb3d07cc36f220d1) is 8M, max 195.6M, 187.6M free.
Jul 10 23:39:21.776006 systemd-journald[1154]: Received client request to flush runtime journal.
Jul 10 23:39:21.747958 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 23:39:21.749695 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 10 23:39:21.752578 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 10 23:39:21.773969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 23:39:21.775665 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 10 23:39:21.777268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 10 23:39:21.780893 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 10 23:39:21.783845 kernel: loop0: detected capacity change from 0 to 211168
Jul 10 23:39:21.787383 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 10 23:39:21.789094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 10 23:39:21.792200 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 10 23:39:21.805879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 10 23:39:21.806654 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 10 23:39:21.809640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 23:39:21.813240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 23:39:21.826331 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 10 23:39:21.830843 kernel: loop1: detected capacity change from 0 to 138376
Jul 10 23:39:21.843286 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 10 23:39:21.843302 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Jul 10 23:39:21.850222 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 23:39:21.874136 kernel: loop2: detected capacity change from 0 to 107312
Jul 10 23:39:21.901866 kernel: loop3: detected capacity change from 0 to 211168
Jul 10 23:39:21.908837 kernel: loop4: detected capacity change from 0 to 138376
Jul 10 23:39:21.915854 kernel: loop5: detected capacity change from 0 to 107312
Jul 10 23:39:21.920279 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 10 23:39:21.920655 (sd-merge)[1223]: Merged extensions into '/usr'.
Jul 10 23:39:21.924290 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 10 23:39:21.924309 systemd[1]: Reloading...
Jul 10 23:39:21.986850 zram_generator::config[1250]: No configuration found.
Jul 10 23:39:22.062889 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 10 23:39:22.070222 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:39:22.141572 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 10 23:39:22.142118 systemd[1]: Reloading finished in 217 ms.
Jul 10 23:39:22.175719 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 10 23:39:22.178848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 10 23:39:22.207322 systemd[1]: Starting ensure-sysext.service...
Jul 10 23:39:22.209067 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 23:39:22.230377 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Jul 10 23:39:22.230393 systemd[1]: Reloading...
Jul 10 23:39:22.237117 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 10 23:39:22.237493 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 10 23:39:22.237894 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 10 23:39:22.238207 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 10 23:39:22.238950 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 10 23:39:22.239260 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jul 10 23:39:22.239367 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jul 10 23:39:22.242822 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 23:39:22.242933 systemd-tmpfiles[1285]: Skipping /boot
Jul 10 23:39:22.252047 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jul 10 23:39:22.252159 systemd-tmpfiles[1285]: Skipping /boot
Jul 10 23:39:22.269874 zram_generator::config[1312]: No configuration found.
Jul 10 23:39:22.348151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 23:39:22.418590 systemd[1]: Reloading finished in 187 ms.
Jul 10 23:39:22.437525 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 10 23:39:22.443260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 23:39:22.449257 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 10 23:39:22.451510 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 10 23:39:22.453904 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 10 23:39:22.457357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 23:39:22.461168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 23:39:22.465033 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 10 23:39:22.469799 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:39:22.471682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:39:22.474354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:39:22.476399 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:39:22.477669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:39:22.477793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:39:22.481386 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 10 23:39:22.490555 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 10 23:39:22.495786 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:39:22.495987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:39:22.497420 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:39:22.497562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:39:22.505555 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jul 10 23:39:22.506975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:39:22.507178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:39:22.509156 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 10 23:39:22.516227 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 10 23:39:22.520195 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 10 23:39:22.522470 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 10 23:39:22.527122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 10 23:39:22.530659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 10 23:39:22.532460 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 10 23:39:22.533483 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 10 23:39:22.533534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 10 23:39:22.548300 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 10 23:39:22.550606 augenrules[1403]: No rules
Jul 10 23:39:22.550900 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 10 23:39:22.551267 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 23:39:22.552733 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 10 23:39:22.555680 systemd[1]: Finished ensure-sysext.service.
Jul 10 23:39:22.558052 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 10 23:39:22.558242 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 10 23:39:22.560122 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 10 23:39:22.561519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 10 23:39:22.562991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 10 23:39:22.563148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 10 23:39:22.567435 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 10 23:39:22.568291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 10 23:39:22.572377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 10 23:39:22.572542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 10 23:39:22.584045 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 10 23:39:22.594451 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 23:39:22.595627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 10 23:39:22.595699 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 10 23:39:22.597855 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 10 23:39:22.633701 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 10 23:39:22.641383 systemd-resolved[1352]: Positive Trust Anchors:
Jul 10 23:39:22.645998 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 23:39:22.646043 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 23:39:22.656862 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Jul 10 23:39:22.658378 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 23:39:22.659928 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 23:39:22.683253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 23:39:22.687052 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 10 23:39:22.719407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 10 23:39:22.722409 systemd-networkd[1429]: lo: Link UP
Jul 10 23:39:22.722415 systemd-networkd[1429]: lo: Gained carrier
Jul 10 23:39:22.726725 systemd-networkd[1429]: Enumeration completed
Jul 10 23:39:22.726857 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 23:39:22.727159 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:39:22.727164 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 23:39:22.727965 systemd[1]: Reached target network.target - Network.
Jul 10 23:39:22.730645 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 10 23:39:22.731232 systemd-networkd[1429]: eth0: Link UP
Jul 10 23:39:22.731341 systemd-networkd[1429]: eth0: Gained carrier
Jul 10 23:39:22.731355 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 23:39:22.732919 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 10 23:39:22.733942 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 10 23:39:22.735343 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 23:39:22.736253 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 10 23:39:22.737255 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 10 23:39:22.738395 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 10 23:39:22.739451 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 10 23:39:22.739477 systemd[1]: Reached target paths.target - Path Units.
Jul 10 23:39:22.740887 systemd[1]: Reached target time-set.target - System Time Set.
Jul 10 23:39:22.741790 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 10 23:39:22.742645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 10 23:39:22.743625 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 23:39:22.743871 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.34/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 23:39:22.744339 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
Jul 10 23:39:22.744974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 10 23:39:22.747445 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 10 23:39:23.195088 systemd-resolved[1352]: Clock change detected. Flushing caches.
Jul 10 23:39:23.195097 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 10 23:39:23.195187 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 10 23:39:23.195301 systemd-timesyncd[1432]: Initial clock synchronization to Thu 2025-07-10 23:39:23.195055 UTC.
Jul 10 23:39:23.196927 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 10 23:39:23.197839 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 10 23:39:23.202335 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 10 23:39:23.203976 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 10 23:39:23.206482 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 10 23:39:23.207610 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 23:39:23.208345 systemd[1]: Reached target basic.target - Basic System.
Jul 10 23:39:23.209301 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 10 23:39:23.209327 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 10 23:39:23.210777 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 10 23:39:23.212902 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 10 23:39:23.215737 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 10 23:39:23.221835 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 10 23:39:23.223941 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 10 23:39:23.224695 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 10 23:39:23.226122 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 10 23:39:23.231912 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 10 23:39:23.233280 jq[1467]: false
Jul 10 23:39:23.233639 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 10 23:39:23.239371 extend-filesystems[1468]: Found /dev/vda6
Jul 10 23:39:23.240946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 10 23:39:23.243724 extend-filesystems[1468]: Found /dev/vda9
Jul 10 23:39:23.245336 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 10 23:39:23.246934 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 10 23:39:23.247394 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 10 23:39:23.249058 systemd[1]: Starting update-engine.service - Update Engine...
Jul 10 23:39:23.254689 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 10 23:39:23.256633 extend-filesystems[1468]: Checking size of /dev/vda9
Jul 10 23:39:23.257745 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 10 23:39:23.263791 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 10 23:39:23.264993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 10 23:39:23.265167 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 10 23:39:23.273379 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 10 23:39:23.273929 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 10 23:39:23.276217 systemd[1]: motdgen.service: Deactivated successfully.
Jul 10 23:39:23.276400 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 10 23:39:23.278118 jq[1482]: true
Jul 10 23:39:23.290650 extend-filesystems[1468]: Resized partition /dev/vda9
Jul 10 23:39:23.295115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 23:39:23.301123 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 10 23:39:23.301464 update_engine[1481]: I20250710 23:39:23.300840 1481 main.cc:92] Flatcar Update Engine starting
Jul 10 23:39:23.311723 tar[1492]: linux-arm64/LICENSE
Jul 10 23:39:23.311723 tar[1492]: linux-arm64/helm
Jul 10 23:39:23.315225 extend-filesystems[1506]: resize2fs 1.47.2 (1-Jan-2025)
Jul 10 23:39:23.316565 jq[1507]: true
Jul 10 23:39:23.334732 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 10 23:39:23.347065 dbus-daemon[1465]: [system] SELinux support is enabled
Jul 10 23:39:23.347500 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 10 23:39:23.352593 update_engine[1481]: I20250710 23:39:23.351389 1481 update_check_scheduler.cc:74] Next update check in 11m53s
Jul 10 23:39:23.352718 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 10 23:39:23.352481 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 10 23:39:23.352503 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 10 23:39:23.354716 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 10 23:39:23.354738 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 10 23:39:23.359065 systemd[1]: Started update-engine.service - Update Engine.
Jul 10 23:39:23.367695 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 10 23:39:23.367695 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 10 23:39:23.367695 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 10 23:39:23.371960 extend-filesystems[1468]: Resized filesystem in /dev/vda9
Jul 10 23:39:23.384567 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 10 23:39:23.385933 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 10 23:39:23.386130 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 10 23:39:23.434351 bash[1531]: Updated "/home/core/.ssh/authorized_keys"
Jul 10 23:39:23.462793 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 10 23:39:23.463078 systemd-logind[1478]: New seat seat0.
Jul 10 23:39:23.489940 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 10 23:39:23.491133 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 10 23:39:23.492391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 23:39:23.512459 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 10 23:39:23.526570 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 10 23:39:23.568115 containerd[1500]: time="2025-07-10T23:39:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 10 23:39:23.570303 containerd[1500]: time="2025-07-10T23:39:23.570258534Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 10 23:39:23.580145 containerd[1500]: time="2025-07-10T23:39:23.580092854Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.12µs"
Jul 10 23:39:23.580145 containerd[1500]: time="2025-07-10T23:39:23.580133814Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 10 23:39:23.580145 containerd[1500]: time="2025-07-10T23:39:23.580153534Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 10 23:39:23.580351 containerd[1500]: time="2025-07-10T23:39:23.580330694Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 10 23:39:23.580376 containerd[1500]: time="2025-07-10T23:39:23.580351014Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 10 23:39:23.580414 containerd[1500]: time="2025-07-10T23:39:23.580386414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580459 containerd[1500]: time="2025-07-10T23:39:23.580438894Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580459 containerd[1500]: time="2025-07-10T23:39:23.580454734Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580730 containerd[1500]: time="2025-07-10T23:39:23.580698494Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580730 containerd[1500]: time="2025-07-10T23:39:23.580728654Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580776 containerd[1500]: time="2025-07-10T23:39:23.580741054Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580776 containerd[1500]: time="2025-07-10T23:39:23.580750414Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 10 23:39:23.580851 containerd[1500]: time="2025-07-10T23:39:23.580833054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 10 23:39:23.581035 containerd[1500]: time="2025-07-10T23:39:23.581016454Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 23:39:23.581071 containerd[1500]: time="2025-07-10T23:39:23.581048454Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 10 23:39:23.581071 containerd[1500]: time="2025-07-10T23:39:23.581061214Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 10 23:39:23.581111 containerd[1500]: time="2025-07-10T23:39:23.581095094Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 10 23:39:23.581364 containerd[1500]: time="2025-07-10T23:39:23.581336654Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 10 23:39:23.581427 containerd[1500]: time="2025-07-10T23:39:23.581410934Z" level=info msg="metadata content store policy set" policy=shared
Jul 10 23:39:23.593367 containerd[1500]: time="2025-07-10T23:39:23.593316654Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593385094Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593402974Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593415814Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593427494Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593440454Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 10 23:39:23.593454 containerd[1500]: time="2025-07-10T23:39:23.593451734Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 10 23:39:23.593572 containerd[1500]: time="2025-07-10T23:39:23.593464334Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 10 23:39:23.593572 containerd[1500]: time="2025-07-10T23:39:23.593475534Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 10 23:39:23.593572 containerd[1500]: time="2025-07-10T23:39:23.593506374Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 10 23:39:23.593572 containerd[1500]: time="2025-07-10T23:39:23.593516774Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 10 23:39:23.593572 containerd[1500]: time="2025-07-10T23:39:23.593531134Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593675014Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593702854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593729174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593745694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593756174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593766494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593782734Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593793974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593806694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593817214Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.593826894Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.594010374Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.594024374Z" level=info msg="Start snapshots syncer"
Jul 10 23:39:23.594726 containerd[1500]: time="2025-07-10T23:39:23.594053574Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 10 23:39:23.594975 containerd[1500]: time="2025-07-10T23:39:23.594593454Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 10 23:39:23.594975 containerd[1500]: time="2025-07-10T23:39:23.594648574Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594761854Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594902534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594932894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594944534Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594957334Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594968694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594978934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.594994054Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.595020454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.595031294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.595041974Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 10 23:39:23.595080 containerd[1500]: time="2025-07-10T23:39:23.595078654Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595096214Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595105614Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595115214Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595123014Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595132494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595147774Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595225414Z" level=info msg="runtime interface created"
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595230614Z" level=info msg="created NRI interface"
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595239174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595250254Z" level=info msg="Connect containerd service"
Jul 10 23:39:23.595272 containerd[1500]: time="2025-07-10T23:39:23.595275734Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 10 23:39:23.595982 containerd[1500]: time="2025-07-10T23:39:23.595955454Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 10 23:39:23.720657 containerd[1500]: time="2025-07-10T23:39:23.720264494Z" level=info msg="Start subscribing containerd event"
Jul 10 23:39:23.720657 containerd[1500]: time="2025-07-10T23:39:23.720616134Z" level=info msg="Start recovering state"
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720705134Z" level=info msg="Start event monitor"
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720741534Z" level=info msg="Start cni network conf syncer for default"
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720751174Z" level=info msg="Start streaming server"
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720761654Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720769014Z" level=info msg="runtime interface starting up..."
Jul 10 23:39:23.720784 containerd[1500]: time="2025-07-10T23:39:23.720774774Z" level=info msg="starting plugins..."
Jul 10 23:39:23.720897 containerd[1500]: time="2025-07-10T23:39:23.720789614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 10 23:39:23.721316 containerd[1500]: time="2025-07-10T23:39:23.721292294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 10 23:39:23.722716 containerd[1500]: time="2025-07-10T23:39:23.721347054Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 10 23:39:23.722716 containerd[1500]: time="2025-07-10T23:39:23.721404734Z" level=info msg="containerd successfully booted in 0.155367s"
Jul 10 23:39:23.721518 systemd[1]: Started containerd.service - containerd container runtime.
Jul 10 23:39:23.757994 tar[1492]: linux-arm64/README.md
Jul 10 23:39:23.775792 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 10 23:39:23.812567 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 10 23:39:23.832590 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 10 23:39:23.835859 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 10 23:39:23.861451 systemd[1]: issuegen.service: Deactivated successfully.
Jul 10 23:39:23.861658 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 10 23:39:23.864200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 10 23:39:23.900802 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 10 23:39:23.903702 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 10 23:39:23.908376 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 10 23:39:23.909776 systemd[1]: Reached target getty.target - Login Prompts.
Jul 10 23:39:25.130898 systemd-networkd[1429]: eth0: Gained IPv6LL
Jul 10 23:39:25.134748 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 10 23:39:25.136119 systemd[1]: Reached target network-online.target - Network is Online.
Jul 10 23:39:25.138293 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 10 23:39:25.140645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 23:39:25.157976 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 10 23:39:25.178607 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 10 23:39:25.179053 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 10 23:39:25.181349 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 10 23:39:25.185418 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 10 23:39:25.735600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 23:39:25.737068 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 10 23:39:25.741940 systemd[1]: Startup finished in 2.153s (kernel) + 6.440s (initrd) + 4.271s (userspace) = 12.866s.
Jul 10 23:39:25.743781 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 10 23:39:26.233181 kubelet[1604]: E0710 23:39:26.233059 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 10 23:39:26.235409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 10 23:39:26.235550 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 10 23:39:26.235930 systemd[1]: kubelet.service: Consumed 863ms CPU time, 259.4M memory peak.
Jul 10 23:39:28.291942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 10 23:39:28.293184 systemd[1]: Started sshd@0-10.0.0.34:22-10.0.0.1:36474.service - OpenSSH per-connection server daemon (10.0.0.1:36474).
Jul 10 23:39:28.396065 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 36474 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:39:28.398428 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:39:28.414985 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 10 23:39:28.416285 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 10 23:39:28.419805 systemd-logind[1478]: New session 1 of user core.
Jul 10 23:39:28.448560 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 10 23:39:28.451080 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 10 23:39:28.464281 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 10 23:39:28.467147 systemd-logind[1478]: New session c1 of user core.
Jul 10 23:39:28.586132 systemd[1621]: Queued start job for default target default.target.
Jul 10 23:39:28.593802 systemd[1621]: Created slice app.slice - User Application Slice.
Jul 10 23:39:28.593835 systemd[1621]: Reached target paths.target - Paths.
Jul 10 23:39:28.593923 systemd[1621]: Reached target timers.target - Timers.
Jul 10 23:39:28.595451 systemd[1621]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 10 23:39:28.605497 systemd[1621]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 10 23:39:28.605569 systemd[1621]: Reached target sockets.target - Sockets.
Jul 10 23:39:28.605612 systemd[1621]: Reached target basic.target - Basic System.
Jul 10 23:39:28.605647 systemd[1621]: Reached target default.target - Main User Target.
Jul 10 23:39:28.605692 systemd[1621]: Startup finished in 131ms.
Jul 10 23:39:28.605904 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 10 23:39:28.620956 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 10 23:39:28.696114 systemd[1]: Started sshd@1-10.0.0.34:22-10.0.0.1:36486.service - OpenSSH per-connection server daemon (10.0.0.1:36486).
Jul 10 23:39:28.760679 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 36486 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:39:28.761605 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:39:28.765827 systemd-logind[1478]: New session 2 of user core.
Jul 10 23:39:28.775908 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 10 23:39:28.827733 sshd[1634]: Connection closed by 10.0.0.1 port 36486
Jul 10 23:39:28.827624 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Jul 10 23:39:28.838307 systemd[1]: sshd@1-10.0.0.34:22-10.0.0.1:36486.service: Deactivated successfully.
Jul 10 23:39:28.840369 systemd[1]: session-2.scope: Deactivated successfully.
Jul 10 23:39:28.841157 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit.
Jul 10 23:39:28.844045 systemd[1]: Started sshd@2-10.0.0.34:22-10.0.0.1:36500.service - OpenSSH per-connection server daemon (10.0.0.1:36500).
Jul 10 23:39:28.844962 systemd-logind[1478]: Removed session 2.
Jul 10 23:39:28.906489 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 36500 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:39:28.907899 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:39:28.912233 systemd-logind[1478]: New session 3 of user core.
Jul 10 23:39:28.923955 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 10 23:39:28.972297 sshd[1642]: Connection closed by 10.0.0.1 port 36500
Jul 10 23:39:28.972605 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Jul 10 23:39:28.984215 systemd[1]: sshd@2-10.0.0.34:22-10.0.0.1:36500.service: Deactivated successfully.
Jul 10 23:39:28.985919 systemd[1]: session-3.scope: Deactivated successfully.
Jul 10 23:39:28.986627 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit.
Jul 10 23:39:28.988947 systemd[1]: Started sshd@3-10.0.0.34:22-10.0.0.1:36510.service - OpenSSH per-connection server daemon (10.0.0.1:36510).
Jul 10 23:39:28.989923 systemd-logind[1478]: Removed session 3.
Jul 10 23:39:29.033834 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 36510 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:39:29.034954 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:29.040879 systemd-logind[1478]: New session 4 of user core. Jul 10 23:39:29.049933 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 23:39:29.103432 sshd[1650]: Connection closed by 10.0.0.1 port 36510 Jul 10 23:39:29.103861 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:29.122246 systemd[1]: sshd@3-10.0.0.34:22-10.0.0.1:36510.service: Deactivated successfully. Jul 10 23:39:29.124059 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 23:39:29.124918 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Jul 10 23:39:29.128525 systemd[1]: Started sshd@4-10.0.0.34:22-10.0.0.1:36520.service - OpenSSH per-connection server daemon (10.0.0.1:36520). Jul 10 23:39:29.129281 systemd-logind[1478]: Removed session 4. Jul 10 23:39:29.185219 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 36520 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:39:29.186615 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:29.190694 systemd-logind[1478]: New session 5 of user core. Jul 10 23:39:29.203892 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 10 23:39:29.281884 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 23:39:29.282174 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:39:29.303440 sudo[1659]: pam_unix(sudo:session): session closed for user root Jul 10 23:39:29.307220 sshd[1658]: Connection closed by 10.0.0.1 port 36520 Jul 10 23:39:29.307419 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:29.318634 systemd[1]: sshd@4-10.0.0.34:22-10.0.0.1:36520.service: Deactivated successfully. Jul 10 23:39:29.321424 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 23:39:29.323895 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Jul 10 23:39:29.328777 systemd[1]: Started sshd@5-10.0.0.34:22-10.0.0.1:36526.service - OpenSSH per-connection server daemon (10.0.0.1:36526). Jul 10 23:39:29.329438 systemd-logind[1478]: Removed session 5. Jul 10 23:39:29.391043 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 36526 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:39:29.392384 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:29.397461 systemd-logind[1478]: New session 6 of user core. Jul 10 23:39:29.413996 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 10 23:39:29.466302 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 23:39:29.466564 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:39:29.581972 sudo[1669]: pam_unix(sudo:session): session closed for user root Jul 10 23:39:29.589636 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 10 23:39:29.589949 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:39:29.600416 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 10 23:39:29.648604 augenrules[1691]: No rules Jul 10 23:39:29.649893 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 23:39:29.650154 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 10 23:39:29.651944 sudo[1668]: pam_unix(sudo:session): session closed for user root Jul 10 23:39:29.653751 sshd[1667]: Connection closed by 10.0.0.1 port 36526 Jul 10 23:39:29.654169 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:29.666780 systemd[1]: sshd@5-10.0.0.34:22-10.0.0.1:36526.service: Deactivated successfully. Jul 10 23:39:29.668440 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 23:39:29.669213 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Jul 10 23:39:29.672079 systemd[1]: Started sshd@6-10.0.0.34:22-10.0.0.1:36540.service - OpenSSH per-connection server daemon (10.0.0.1:36540). Jul 10 23:39:29.672629 systemd-logind[1478]: Removed session 6. Jul 10 23:39:29.726822 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 36540 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:39:29.728179 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:39:29.732942 systemd-logind[1478]: New session 7 of user core. 
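Each sudo entry above follows a fixed `user : PWD=… ; USER=… ; COMMAND=…` layout, so the commands run across these SSH sessions can be extracted with a small regex (a sketch; the exact field order depends on sudoers logging options):

```python
import re

# Matches the default sudo log layout seen in the entries above.
SUDO_RE = re.compile(
    r"(?P<user>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)"
)

line = ("core : PWD=/home/core ; USER=root ; "
        "COMMAND=/usr/sbin/systemctl restart audit-rules")
m = SUDO_RE.match(line)
print(m.group("user"), m.group("runas"), m.group("cmd"))
```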
Jul 10 23:39:29.750947 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 23:39:29.804636 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 23:39:29.805300 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 23:39:30.300168 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 23:39:30.314038 (dockerd)[1724]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 23:39:30.650377 dockerd[1724]: time="2025-07-10T23:39:30.650250734Z" level=info msg="Starting up" Jul 10 23:39:30.651715 dockerd[1724]: time="2025-07-10T23:39:30.651673014Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 10 23:39:30.769196 dockerd[1724]: time="2025-07-10T23:39:30.769150334Z" level=info msg="Loading containers: start." Jul 10 23:39:30.778755 kernel: Initializing XFRM netlink socket Jul 10 23:39:31.015241 systemd-networkd[1429]: docker0: Link UP Jul 10 23:39:31.019903 dockerd[1724]: time="2025-07-10T23:39:31.019850694Z" level=info msg="Loading containers: done." 
Jul 10 23:39:31.041469 dockerd[1724]: time="2025-07-10T23:39:31.041407694Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 23:39:31.041619 dockerd[1724]: time="2025-07-10T23:39:31.041544654Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 10 23:39:31.041704 dockerd[1724]: time="2025-07-10T23:39:31.041673334Z" level=info msg="Initializing buildkit" Jul 10 23:39:31.068593 dockerd[1724]: time="2025-07-10T23:39:31.068537094Z" level=info msg="Completed buildkit initialization" Jul 10 23:39:31.073272 dockerd[1724]: time="2025-07-10T23:39:31.073228934Z" level=info msg="Daemon has completed initialization" Jul 10 23:39:31.073329 dockerd[1724]: time="2025-07-10T23:39:31.073283734Z" level=info msg="API listen on /run/docker.sock" Jul 10 23:39:31.073463 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 23:39:31.603916 containerd[1500]: time="2025-07-10T23:39:31.603872894Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 23:39:32.296438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447492437.mount: Deactivated successfully. 
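The `var-lib-containerd-tmpmounts-containerd\x2dmount…` unit names above come from systemd's path escaping, where `/` maps to `-` and a literal `-` becomes `\x2d`; a rough Python reimplementation of the systemd.unit(5) rules (a sketch, not a substitute for the canonical `systemd-escape` tool):

```python
def systemd_escape_path(path: str) -> str:
    # Drop the leading slash, turn '/' into '-', and hex-escape any byte
    # outside [a-zA-Z0-9:_.] (plus a leading '.') as \xXX -- the scheme
    # systemd uses to derive mount unit names from mount points.
    trimmed = path.strip("/")
    if not trimmed:
        return "-"
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# Reproduces the transient containerd mount unit name from the log above
print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount2447492437") + ".mount")
```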
Jul 10 23:39:33.491473 containerd[1500]: time="2025-07-10T23:39:33.491401254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:33.492041 containerd[1500]: time="2025-07-10T23:39:33.491998334Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 10 23:39:33.493413 containerd[1500]: time="2025-07-10T23:39:33.493368614Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:33.495551 containerd[1500]: time="2025-07-10T23:39:33.495513134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:33.496556 containerd[1500]: time="2025-07-10T23:39:33.496484734Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.89256788s" Jul 10 23:39:33.496556 containerd[1500]: time="2025-07-10T23:39:33.496519574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 23:39:33.499620 containerd[1500]: time="2025-07-10T23:39:33.499560014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 23:39:34.739013 containerd[1500]: time="2025-07-10T23:39:34.738964134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:34.740403 containerd[1500]: time="2025-07-10T23:39:34.740362254Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 10 23:39:34.741416 containerd[1500]: time="2025-07-10T23:39:34.741376814Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:34.743856 containerd[1500]: time="2025-07-10T23:39:34.743802334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:34.745015 containerd[1500]: time="2025-07-10T23:39:34.744975734Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.24536004s" Jul 10 23:39:34.745015 containerd[1500]: time="2025-07-10T23:39:34.745010774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 23:39:34.745697 containerd[1500]: time="2025-07-10T23:39:34.745497894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 23:39:35.838408 containerd[1500]: time="2025-07-10T23:39:35.838345174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:35.839236 containerd[1500]: time="2025-07-10T23:39:35.839190054Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 10 23:39:35.840464 containerd[1500]: time="2025-07-10T23:39:35.840428494Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:35.843332 containerd[1500]: time="2025-07-10T23:39:35.843293534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:35.844294 containerd[1500]: time="2025-07-10T23:39:35.844246374Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.0987136s" Jul 10 23:39:35.844326 containerd[1500]: time="2025-07-10T23:39:35.844294294Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 23:39:35.844862 containerd[1500]: time="2025-07-10T23:39:35.844818494Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 23:39:36.486040 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 23:39:36.487480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:36.638270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:39:36.642230 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 23:39:36.685373 kubelet[2006]: E0710 23:39:36.685312 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 23:39:36.689254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 23:39:36.689390 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 23:39:36.690870 systemd[1]: kubelet.service: Consumed 150ms CPU time, 107.7M memory peak. Jul 10 23:39:37.157900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302456186.mount: Deactivated successfully. Jul 10 23:39:37.532019 containerd[1500]: time="2025-07-10T23:39:37.531741454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:37.532857 containerd[1500]: time="2025-07-10T23:39:37.532600494Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 10 23:39:37.535458 containerd[1500]: time="2025-07-10T23:39:37.535417734Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:37.538035 containerd[1500]: time="2025-07-10T23:39:37.537995214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:37.538665 containerd[1500]: time="2025-07-10T23:39:37.538552934Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.69368232s" Jul 10 23:39:37.538665 containerd[1500]: time="2025-07-10T23:39:37.538603974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 23:39:37.539093 containerd[1500]: time="2025-07-10T23:39:37.539067334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 23:39:38.185658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4254852229.mount: Deactivated successfully. Jul 10 23:39:39.069800 containerd[1500]: time="2025-07-10T23:39:39.069752654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:39.070749 containerd[1500]: time="2025-07-10T23:39:39.070722814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 10 23:39:39.071560 containerd[1500]: time="2025-07-10T23:39:39.071499534Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:39.074424 containerd[1500]: time="2025-07-10T23:39:39.074371934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:39.075956 containerd[1500]: time="2025-07-10T23:39:39.075868214Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id 
\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.53676588s" Jul 10 23:39:39.075956 containerd[1500]: time="2025-07-10T23:39:39.075898654Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 23:39:39.076448 containerd[1500]: time="2025-07-10T23:39:39.076399734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 23:39:39.541090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55793036.mount: Deactivated successfully. Jul 10 23:39:39.545294 containerd[1500]: time="2025-07-10T23:39:39.545096814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:39:39.545637 containerd[1500]: time="2025-07-10T23:39:39.545605534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 23:39:39.546649 containerd[1500]: time="2025-07-10T23:39:39.546597654Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:39:39.548666 containerd[1500]: time="2025-07-10T23:39:39.548610774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 23:39:39.549394 containerd[1500]: time="2025-07-10T23:39:39.549281214Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 472.84928ms" Jul 10 23:39:39.549394 containerd[1500]: time="2025-07-10T23:39:39.549308654Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 23:39:39.549784 containerd[1500]: time="2025-07-10T23:39:39.549737974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 23:39:40.115350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071436978.mount: Deactivated successfully. Jul 10 23:39:42.573663 containerd[1500]: time="2025-07-10T23:39:42.573165934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:42.573663 containerd[1500]: time="2025-07-10T23:39:42.573602814Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 10 23:39:42.574604 containerd[1500]: time="2025-07-10T23:39:42.574566014Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:42.578134 containerd[1500]: time="2025-07-10T23:39:42.577724334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:39:42.578698 containerd[1500]: time="2025-07-10T23:39:42.578664694Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag 
\"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.02887212s" Jul 10 23:39:42.578698 containerd[1500]: time="2025-07-10T23:39:42.578701214Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 23:39:46.888274 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 23:39:46.889813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:46.900359 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 23:39:46.900441 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 23:39:46.900658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:39:46.902915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:46.923014 systemd[1]: Reload requested from client PID 2166 ('systemctl') (unit session-7.scope)... Jul 10 23:39:46.923032 systemd[1]: Reloading... Jul 10 23:39:47.000741 zram_generator::config[2215]: No configuration found. Jul 10 23:39:47.103491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:39:47.198986 systemd[1]: Reloading finished in 275 ms. Jul 10 23:39:47.247294 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 23:39:47.247376 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 23:39:47.247667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:39:47.247727 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95M memory peak. 
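The "Scheduled restart job, restart counter is at N" entries are systemd re-launching the kubelet after each failed exit; the behaviour corresponds to a unit stanza along these lines (an illustrative sketch, not the actual Flatcar kubelet.service — though the 10 s spacing between the failure at 23:39:36 and the retry at 23:39:46 is consistent with it):

```ini
[Service]
# Relaunch the kubelet whenever it exits non-zero, e.g. while
# /var/lib/kubelet/config.yaml has not been written yet
Restart=on-failure
RestartSec=10
```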
Jul 10 23:39:47.250444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:47.366551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:39:47.370644 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:39:47.403086 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:39:47.403086 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:39:47.403086 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
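The deprecation warnings above point at the kubelet config file as the replacement for those flags; the equivalent KubeletConfiguration fields look roughly like this (a sketch — the volume plugin path is the flexvolume directory probed later in this log, while the containerd socket path is an assumption, not stated here):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (socket path assumed)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config-file equivalent; per the
# warning above, the image garbage collector will read the sandbox
# image from the CRI runtime instead
```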
Jul 10 23:39:47.403420 kubelet[2254]: I0710 23:39:47.403111 2254 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:39:49.043752 kubelet[2254]: I0710 23:39:49.043323 2254 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:39:49.043752 kubelet[2254]: I0710 23:39:49.043356 2254 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:39:49.043752 kubelet[2254]: I0710 23:39:49.043562 2254 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:39:49.090989 kubelet[2254]: E0710 23:39:49.090934 2254 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 23:39:49.091627 kubelet[2254]: I0710 23:39:49.091600 2254 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:39:49.104325 kubelet[2254]: I0710 23:39:49.104279 2254 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 10 23:39:49.107131 kubelet[2254]: I0710 23:39:49.107098 2254 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 23:39:49.108229 kubelet[2254]: I0710 23:39:49.108166 2254 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:39:49.108476 kubelet[2254]: I0710 23:39:49.108216 2254 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 23:39:49.108476 kubelet[2254]: I0710 23:39:49.108457 2254 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:39:49.108476 
kubelet[2254]: I0710 23:39:49.108465 2254 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:39:49.109256 kubelet[2254]: I0710 23:39:49.109220 2254 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:39:49.111864 kubelet[2254]: I0710 23:39:49.111836 2254 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:39:49.111864 kubelet[2254]: I0710 23:39:49.111858 2254 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:39:49.112187 kubelet[2254]: I0710 23:39:49.111886 2254 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:39:49.112898 kubelet[2254]: I0710 23:39:49.112882 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:39:49.114174 kubelet[2254]: I0710 23:39:49.114145 2254 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:39:49.114444 kubelet[2254]: E0710 23:39:49.114398 2254 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 23:39:49.114951 kubelet[2254]: E0710 23:39:49.114892 2254 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 23:39:49.116396 kubelet[2254]: I0710 23:39:49.115449 2254 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:39:49.116396 kubelet[2254]: W0710 
23:39:49.115596 2254 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 23:39:49.123274 kubelet[2254]: I0710 23:39:49.119317 2254 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:39:49.123274 kubelet[2254]: I0710 23:39:49.119374 2254 server.go:1289] "Started kubelet" Jul 10 23:39:49.123274 kubelet[2254]: I0710 23:39:49.119784 2254 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 23:39:49.123274 kubelet[2254]: I0710 23:39:49.122788 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:39:49.126988 kubelet[2254]: I0710 23:39:49.126959 2254 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:39:49.129409 kubelet[2254]: I0710 23:39:49.129350 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:39:49.129667 kubelet[2254]: I0710 23:39:49.129644 2254 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:39:49.130477 kubelet[2254]: I0710 23:39:49.130444 2254 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:39:49.132829 kubelet[2254]: E0710 23:39:49.132223 2254 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 23:39:49.132829 kubelet[2254]: I0710 23:39:49.132264 2254 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:39:49.132829 kubelet[2254]: I0710 23:39:49.132464 2254 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:39:49.132829 kubelet[2254]: I0710 23:39:49.132524 2254 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:39:49.133002 kubelet[2254]: E0710 23:39:49.132941 2254 reflector.go:200] "Failed to watch" err="failed to list 
*v1.CSIDriver: Get \"https://10.0.0.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 23:39:49.133776 kubelet[2254]: E0710 23:39:49.129826 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.34:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.34:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851083d3c2b5c96 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 23:39:49.119335574 +0000 UTC m=+1.745370921,LastTimestamp:2025-07-10 23:39:49.119335574 +0000 UTC m=+1.745370921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 23:39:49.134137 kubelet[2254]: E0710 23:39:49.134100 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="200ms" Jul 10 23:39:49.134594 kubelet[2254]: E0710 23:39:49.134494 2254 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:39:49.134594 kubelet[2254]: I0710 23:39:49.134516 2254 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:39:49.134680 kubelet[2254]: I0710 23:39:49.134617 2254 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:39:49.135681 kubelet[2254]: I0710 23:39:49.135611 2254 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:39:49.145273 kubelet[2254]: I0710 23:39:49.145208 2254 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:39:49.146613 kubelet[2254]: I0710 23:39:49.146387 2254 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 23:39:49.146613 kubelet[2254]: I0710 23:39:49.146412 2254 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:39:49.146613 kubelet[2254]: I0710 23:39:49.146433 2254 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 10 23:39:49.146613 kubelet[2254]: I0710 23:39:49.146439 2254 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:39:49.146613 kubelet[2254]: E0710 23:39:49.146481 2254 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:39:49.152106 kubelet[2254]: I0710 23:39:49.152064 2254 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:39:49.152106 kubelet[2254]: I0710 23:39:49.152084 2254 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:39:49.152106 kubelet[2254]: I0710 23:39:49.152102 2254 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:39:49.154961 kubelet[2254]: E0710 23:39:49.154918 2254 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 23:39:49.180787 kubelet[2254]: I0710 23:39:49.180750 2254 policy_none.go:49] "None policy: Start" Jul 10 23:39:49.180787 kubelet[2254]: I0710 23:39:49.180783 2254 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:39:49.180866 kubelet[2254]: I0710 23:39:49.180796 2254 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:39:49.186380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 10 23:39:49.199933 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 23:39:49.203021 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 10 23:39:49.218620 kubelet[2254]: E0710 23:39:49.217869 2254 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:39:49.218620 kubelet[2254]: I0710 23:39:49.218073 2254 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:39:49.218620 kubelet[2254]: I0710 23:39:49.218083 2254 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:39:49.225492 kubelet[2254]: I0710 23:39:49.218966 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:39:49.226285 kubelet[2254]: E0710 23:39:49.226207 2254 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 23:39:49.226285 kubelet[2254]: E0710 23:39:49.226256 2254 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 23:39:49.286782 systemd[1]: Created slice kubepods-burstable-pod4a6f96729291c3f44e2f1d3bb5e8a6fe.slice - libcontainer container kubepods-burstable-pod4a6f96729291c3f44e2f1d3bb5e8a6fe.slice. Jul 10 23:39:49.301698 kubelet[2254]: E0710 23:39:49.301584 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:49.310401 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 10 23:39:49.312317 kubelet[2254]: E0710 23:39:49.312249 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:49.319296 kubelet[2254]: I0710 23:39:49.319242 2254 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:39:49.319777 kubelet[2254]: E0710 23:39:49.319698 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jul 10 23:39:49.335576 kubelet[2254]: E0710 23:39:49.335520 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="400ms" Jul 10 23:39:49.337948 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 10 23:39:49.340419 kubelet[2254]: E0710 23:39:49.340125 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:49.434373 kubelet[2254]: I0710 23:39:49.434329 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:49.434614 kubelet[2254]: I0710 23:39:49.434370 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:49.434614 kubelet[2254]: I0710 23:39:49.434407 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:49.434614 kubelet[2254]: I0710 23:39:49.434432 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:49.434614 kubelet[2254]: I0710 23:39:49.434453 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:49.434614 kubelet[2254]: I0710 23:39:49.434468 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:49.434783 kubelet[2254]: I0710 23:39:49.434483 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:49.434783 kubelet[2254]: I0710 23:39:49.434525 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:49.434783 kubelet[2254]: I0710 23:39:49.434558 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:49.521841 kubelet[2254]: I0710 23:39:49.521757 2254 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:39:49.522094 kubelet[2254]: E0710 
23:39:49.522072 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jul 10 23:39:49.602996 kubelet[2254]: E0710 23:39:49.602896 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.603580 containerd[1500]: time="2025-07-10T23:39:49.603521374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a6f96729291c3f44e2f1d3bb5e8a6fe,Namespace:kube-system,Attempt:0,}" Jul 10 23:39:49.614194 kubelet[2254]: E0710 23:39:49.612789 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.614535 containerd[1500]: time="2025-07-10T23:39:49.614496974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 23:39:49.641682 kubelet[2254]: E0710 23:39:49.641645 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.642877 containerd[1500]: time="2025-07-10T23:39:49.642833854Z" level=info msg="connecting to shim 0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413" address="unix:///run/containerd/s/64de8fc5e7a0dfdcffa95c4b4db76c75d7975f1adcff5fa4c1260c29590d53cc" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:39:49.643074 containerd[1500]: time="2025-07-10T23:39:49.643038014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 23:39:49.643313 
containerd[1500]: time="2025-07-10T23:39:49.643088214Z" level=info msg="connecting to shim 1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9" address="unix:///run/containerd/s/587e620525a6d4f217a8f07c5c3d9ea667214385faa573da7067d8273ce2ca97" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:39:49.677444 containerd[1500]: time="2025-07-10T23:39:49.677399174Z" level=info msg="connecting to shim 8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539" address="unix:///run/containerd/s/f72b558a2243102c2091bb433c8745b27fde6c44f7b509954e77b771819fb468" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:39:49.679919 systemd[1]: Started cri-containerd-0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413.scope - libcontainer container 0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413. Jul 10 23:39:49.681135 systemd[1]: Started cri-containerd-1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9.scope - libcontainer container 1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9. Jul 10 23:39:49.713915 systemd[1]: Started cri-containerd-8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539.scope - libcontainer container 8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539. 
Jul 10 23:39:49.736436 kubelet[2254]: E0710 23:39:49.736361 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.34:6443: connect: connection refused" interval="800ms" Jul 10 23:39:49.746078 containerd[1500]: time="2025-07-10T23:39:49.746027934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4a6f96729291c3f44e2f1d3bb5e8a6fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9\"" Jul 10 23:39:49.747366 kubelet[2254]: E0710 23:39:49.747320 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.756260 containerd[1500]: time="2025-07-10T23:39:49.756216654Z" level=info msg="CreateContainer within sandbox \"1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 23:39:49.756941 containerd[1500]: time="2025-07-10T23:39:49.756663854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413\"" Jul 10 23:39:49.758157 kubelet[2254]: E0710 23:39:49.758134 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.766847 containerd[1500]: time="2025-07-10T23:39:49.766813214Z" level=info msg="CreateContainer within sandbox \"0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 
23:39:49.775182 containerd[1500]: time="2025-07-10T23:39:49.775145894Z" level=info msg="Container 1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:39:49.783438 containerd[1500]: time="2025-07-10T23:39:49.783397294Z" level=info msg="Container ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:39:49.789460 containerd[1500]: time="2025-07-10T23:39:49.789391054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539\"" Jul 10 23:39:49.790234 kubelet[2254]: E0710 23:39:49.790210 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:49.796762 containerd[1500]: time="2025-07-10T23:39:49.796729534Z" level=info msg="CreateContainer within sandbox \"8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 23:39:49.797091 containerd[1500]: time="2025-07-10T23:39:49.797029254Z" level=info msg="CreateContainer within sandbox \"1672e35bfa4e6d9f54c19ba179bc263359e95da185e703013caf89c67771a2a9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5\"" Jul 10 23:39:49.797987 containerd[1500]: time="2025-07-10T23:39:49.797957054Z" level=info msg="StartContainer for \"1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5\"" Jul 10 23:39:49.799070 containerd[1500]: time="2025-07-10T23:39:49.799038014Z" level=info msg="connecting to shim 1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5" 
address="unix:///run/containerd/s/587e620525a6d4f217a8f07c5c3d9ea667214385faa573da7067d8273ce2ca97" protocol=ttrpc version=3 Jul 10 23:39:49.800258 containerd[1500]: time="2025-07-10T23:39:49.800219974Z" level=info msg="CreateContainer within sandbox \"0cfdbb7d2a57a1d0b6b3e9ea5da628597c9a4918e8192cb8070ec670f0c57413\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493\"" Jul 10 23:39:49.800757 containerd[1500]: time="2025-07-10T23:39:49.800734654Z" level=info msg="StartContainer for \"ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493\"" Jul 10 23:39:49.801794 containerd[1500]: time="2025-07-10T23:39:49.801764014Z" level=info msg="connecting to shim ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493" address="unix:///run/containerd/s/64de8fc5e7a0dfdcffa95c4b4db76c75d7975f1adcff5fa4c1260c29590d53cc" protocol=ttrpc version=3 Jul 10 23:39:49.803701 containerd[1500]: time="2025-07-10T23:39:49.803665574Z" level=info msg="Container f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:39:49.812113 containerd[1500]: time="2025-07-10T23:39:49.811875854Z" level=info msg="CreateContainer within sandbox \"8f9584ca170665166ef8c707b32c690535dd3065694b78578e264b5bc1c13539\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4\"" Jul 10 23:39:49.812792 containerd[1500]: time="2025-07-10T23:39:49.812755494Z" level=info msg="StartContainer for \"f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4\"" Jul 10 23:39:49.813808 containerd[1500]: time="2025-07-10T23:39:49.813776214Z" level=info msg="connecting to shim f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4" address="unix:///run/containerd/s/f72b558a2243102c2091bb433c8745b27fde6c44f7b509954e77b771819fb468" protocol=ttrpc 
version=3 Jul 10 23:39:49.822926 systemd[1]: Started cri-containerd-1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5.scope - libcontainer container 1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5. Jul 10 23:39:49.826108 systemd[1]: Started cri-containerd-ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493.scope - libcontainer container ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493. Jul 10 23:39:49.841997 systemd[1]: Started cri-containerd-f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4.scope - libcontainer container f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4. Jul 10 23:39:49.897967 containerd[1500]: time="2025-07-10T23:39:49.897858094Z" level=info msg="StartContainer for \"1b5824ce1fba503f7a6cca34a6aedaf80d61b820c9dff3bd0bf32a0fb9dd98a5\" returns successfully" Jul 10 23:39:49.916255 containerd[1500]: time="2025-07-10T23:39:49.916217534Z" level=info msg="StartContainer for \"ce97c45259751f2b0707022aa43e6f5631e26a46db3a64dc16922fcc1f722493\" returns successfully" Jul 10 23:39:49.929442 kubelet[2254]: I0710 23:39:49.929206 2254 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:39:49.929563 kubelet[2254]: E0710 23:39:49.929529 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.34:6443/api/v1/nodes\": dial tcp 10.0.0.34:6443: connect: connection refused" node="localhost" Jul 10 23:39:49.930144 containerd[1500]: time="2025-07-10T23:39:49.929764934Z" level=info msg="StartContainer for \"f921bebc5248a774a3cf0b35ff55bf8ae919d710679e9391c1bfdceeca5429c4\" returns successfully" Jul 10 23:39:50.162818 kubelet[2254]: E0710 23:39:50.161692 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:50.162818 kubelet[2254]: E0710 23:39:50.162776 2254 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:50.163161 kubelet[2254]: E0710 23:39:50.162037 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:50.163161 kubelet[2254]: E0710 23:39:50.162934 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:50.173268 kubelet[2254]: E0710 23:39:50.173072 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:50.173268 kubelet[2254]: E0710 23:39:50.173212 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:50.731251 kubelet[2254]: I0710 23:39:50.731210 2254 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:39:51.171569 kubelet[2254]: E0710 23:39:51.171462 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:51.173125 kubelet[2254]: E0710 23:39:51.171578 2254 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 23:39:51.173125 kubelet[2254]: E0710 23:39:51.171613 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:51.173125 kubelet[2254]: E0710 23:39:51.171734 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:52.554598 kubelet[2254]: E0710 23:39:52.554544 2254 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 23:39:52.753285 kubelet[2254]: I0710 23:39:52.753245 2254 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 23:39:52.832988 kubelet[2254]: I0710 23:39:52.832875 2254 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:52.837507 kubelet[2254]: E0710 23:39:52.837457 2254 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:52.837507 kubelet[2254]: I0710 23:39:52.837491 2254 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:52.839319 kubelet[2254]: E0710 23:39:52.839281 2254 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:52.839319 kubelet[2254]: I0710 23:39:52.839309 2254 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:52.840780 kubelet[2254]: E0710 23:39:52.840754 2254 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:53.115598 kubelet[2254]: I0710 23:39:53.115478 2254 apiserver.go:52] "Watching apiserver" Jul 10 23:39:53.133098 kubelet[2254]: I0710 23:39:53.133049 2254 desired_state_of_world_populator.go:158] "Finished populating initial desired 
state of world" Jul 10 23:39:54.109472 kubelet[2254]: I0710 23:39:54.109294 2254 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:54.121961 kubelet[2254]: E0710 23:39:54.121917 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:54.175799 kubelet[2254]: E0710 23:39:54.175488 2254 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:54.835273 systemd[1]: Reload requested from client PID 2537 ('systemctl') (unit session-7.scope)... Jul 10 23:39:54.835292 systemd[1]: Reloading... Jul 10 23:39:54.913753 zram_generator::config[2580]: No configuration found. Jul 10 23:39:54.988143 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 23:39:55.102200 systemd[1]: Reloading finished in 266 ms. Jul 10 23:39:55.141514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:55.154881 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 23:39:55.155818 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 23:39:55.155886 systemd[1]: kubelet.service: Consumed 2.164s CPU time, 126.8M memory peak. Jul 10 23:39:55.159958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 23:39:55.300439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 23:39:55.305540 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 23:39:55.349389 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:39:55.349389 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 23:39:55.349389 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 23:39:55.349823 kubelet[2622]: I0710 23:39:55.349427 2622 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 23:39:55.357857 kubelet[2622]: I0710 23:39:55.357701 2622 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 23:39:55.357857 kubelet[2622]: I0710 23:39:55.357753 2622 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 23:39:55.358277 kubelet[2622]: I0710 23:39:55.358017 2622 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 23:39:55.360689 kubelet[2622]: I0710 23:39:55.360590 2622 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 10 23:39:55.363183 kubelet[2622]: I0710 23:39:55.363107 2622 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 23:39:55.369186 kubelet[2622]: I0710 23:39:55.368702 2622 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Jul 10 23:39:55.374672 kubelet[2622]: I0710 23:39:55.374576 2622 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 23:39:55.374975 kubelet[2622]: I0710 23:39:55.374834 2622 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 23:39:55.375292 kubelet[2622]: I0710 23:39:55.374863 2622 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Jul 10 23:39:55.375292 kubelet[2622]: I0710 23:39:55.375053 2622 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 23:39:55.375292 kubelet[2622]: I0710 23:39:55.375063 2622 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 23:39:55.375292 kubelet[2622]: I0710 23:39:55.375135 2622 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:39:55.376073 kubelet[2622]: I0710 23:39:55.375352 2622 kubelet.go:480] "Attempting to sync node with API server" Jul 10 23:39:55.376073 kubelet[2622]: I0710 23:39:55.375372 2622 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 23:39:55.376073 kubelet[2622]: I0710 23:39:55.375411 2622 kubelet.go:386] "Adding apiserver pod source" Jul 10 23:39:55.376073 kubelet[2622]: I0710 23:39:55.375429 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 23:39:55.378142 kubelet[2622]: I0710 23:39:55.378117 2622 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 10 23:39:55.379232 kubelet[2622]: I0710 23:39:55.379211 2622 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 23:39:55.388176 kubelet[2622]: I0710 23:39:55.388133 2622 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 23:39:55.388315 kubelet[2622]: I0710 23:39:55.388204 2622 server.go:1289] "Started kubelet" Jul 10 23:39:55.389040 kubelet[2622]: I0710 23:39:55.388974 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 23:39:55.389322 kubelet[2622]: I0710 23:39:55.389291 2622 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 23:39:55.389383 kubelet[2622]: I0710 23:39:55.389350 2622 server.go:180] "Starting to listen" address="0.0.0.0" 
port=10250 Jul 10 23:39:55.390633 kubelet[2622]: I0710 23:39:55.390096 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 23:39:55.390633 kubelet[2622]: I0710 23:39:55.390293 2622 server.go:317] "Adding debug handlers to kubelet server" Jul 10 23:39:55.393020 kubelet[2622]: I0710 23:39:55.392988 2622 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 23:39:55.397747 kubelet[2622]: I0710 23:39:55.397551 2622 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 23:39:55.397747 kubelet[2622]: I0710 23:39:55.397674 2622 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 23:39:55.397903 kubelet[2622]: I0710 23:39:55.397838 2622 reconciler.go:26] "Reconciler: start to sync state" Jul 10 23:39:55.402734 kubelet[2622]: I0710 23:39:55.401240 2622 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 23:39:55.404645 kubelet[2622]: E0710 23:39:55.404595 2622 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 23:39:55.406008 kubelet[2622]: I0710 23:39:55.405967 2622 factory.go:223] Registration of the containerd container factory successfully Jul 10 23:39:55.406008 kubelet[2622]: I0710 23:39:55.405990 2622 factory.go:223] Registration of the systemd container factory successfully Jul 10 23:39:55.431187 kubelet[2622]: I0710 23:39:55.431129 2622 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 23:39:55.434072 kubelet[2622]: I0710 23:39:55.434030 2622 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jul 10 23:39:55.434244 kubelet[2622]: I0710 23:39:55.434154 2622 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 23:39:55.434244 kubelet[2622]: I0710 23:39:55.434177 2622 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 23:39:55.434244 kubelet[2622]: I0710 23:39:55.434184 2622 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 23:39:55.434244 kubelet[2622]: E0710 23:39:55.434230 2622 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 23:39:55.459455 kubelet[2622]: I0710 23:39:55.459421 2622 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 23:39:55.459455 kubelet[2622]: I0710 23:39:55.459440 2622 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 23:39:55.459455 kubelet[2622]: I0710 23:39:55.459463 2622 state_mem.go:36] "Initialized new in-memory state store" Jul 10 23:39:55.459664 kubelet[2622]: I0710 23:39:55.459625 2622 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 23:39:55.459664 kubelet[2622]: I0710 23:39:55.459637 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 23:39:55.459664 kubelet[2622]: I0710 23:39:55.459656 2622 policy_none.go:49] "None policy: Start" Jul 10 23:39:55.459664 kubelet[2622]: I0710 23:39:55.459665 2622 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 23:39:55.459756 kubelet[2622]: I0710 23:39:55.459674 2622 state_mem.go:35] "Initializing new in-memory state store" Jul 10 23:39:55.459802 kubelet[2622]: I0710 23:39:55.459783 2622 state_mem.go:75] "Updated machine memory state" Jul 10 23:39:55.464418 kubelet[2622]: E0710 23:39:55.464382 2622 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 23:39:55.464615 kubelet[2622]: I0710 
23:39:55.464586 2622 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 23:39:55.464656 kubelet[2622]: I0710 23:39:55.464605 2622 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 23:39:55.464919 kubelet[2622]: I0710 23:39:55.464897 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 23:39:55.468796 kubelet[2622]: E0710 23:39:55.468746 2622 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 23:39:55.535907 kubelet[2622]: I0710 23:39:55.535861 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.535907 kubelet[2622]: I0710 23:39:55.535900 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:55.536073 kubelet[2622]: I0710 23:39:55.535997 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:55.542737 kubelet[2622]: E0710 23:39:55.542671 2622 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:55.571830 kubelet[2622]: I0710 23:39:55.571805 2622 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 23:39:55.579516 kubelet[2622]: I0710 23:39:55.579471 2622 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 10 23:39:55.579657 kubelet[2622]: I0710 23:39:55.579594 2622 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 10 23:39:55.698980 kubelet[2622]: I0710 23:39:55.698773 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:55.698980 kubelet[2622]: I0710 23:39:55.698815 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.698980 kubelet[2622]: I0710 23:39:55.698837 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.698980 kubelet[2622]: I0710 23:39:55.698867 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.698980 kubelet[2622]: I0710 23:39:55.698886 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:55.699457 kubelet[2622]: I0710 23:39:55.698901 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:55.699457 kubelet[2622]: I0710 23:39:55.698916 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.699457 kubelet[2622]: I0710 23:39:55.698932 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:55.699457 kubelet[2622]: I0710 23:39:55.698946 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a6f96729291c3f44e2f1d3bb5e8a6fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4a6f96729291c3f44e2f1d3bb5e8a6fe\") " pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:55.841217 kubelet[2622]: E0710 23:39:55.841185 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:55.841201 sudo[2662]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 10 23:39:55.841637 sudo[2662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 10 23:39:55.843190 kubelet[2622]: E0710 23:39:55.842740 2622 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:55.843430 kubelet[2622]: E0710 23:39:55.843349 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:56.307501 sudo[2662]: pam_unix(sudo:session): session closed for user root Jul 10 23:39:56.376557 kubelet[2622]: I0710 23:39:56.376272 2622 apiserver.go:52] "Watching apiserver" Jul 10 23:39:56.398402 kubelet[2622]: I0710 23:39:56.398324 2622 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 23:39:56.465065 kubelet[2622]: I0710 23:39:56.464896 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:56.466377 kubelet[2622]: I0710 23:39:56.466310 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:56.467232 kubelet[2622]: I0710 23:39:56.466914 2622 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:56.482010 kubelet[2622]: E0710 23:39:56.481776 2622 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 10 23:39:56.482010 kubelet[2622]: E0710 23:39:56.481999 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:56.483096 kubelet[2622]: E0710 23:39:56.482670 2622 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 10 23:39:56.483096 kubelet[2622]: E0710 23:39:56.482802 2622 kubelet.go:3311] "Failed creating a 
mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 23:39:56.483096 kubelet[2622]: E0710 23:39:56.482855 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:56.483096 kubelet[2622]: E0710 23:39:56.482938 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:56.508421 kubelet[2622]: I0710 23:39:56.508334 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5083154539999999 podStartE2EDuration="1.508315454s" podCreationTimestamp="2025-07-10 23:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:39:56.497917159 +0000 UTC m=+1.185709346" watchObservedRunningTime="2025-07-10 23:39:56.508315454 +0000 UTC m=+1.196107641" Jul 10 23:39:56.523631 kubelet[2622]: I0710 23:39:56.523371 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.523353239 podStartE2EDuration="2.523353239s" podCreationTimestamp="2025-07-10 23:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:39:56.514342576 +0000 UTC m=+1.202134763" watchObservedRunningTime="2025-07-10 23:39:56.523353239 +0000 UTC m=+1.211145426" Jul 10 23:39:56.523631 kubelet[2622]: I0710 23:39:56.523525 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.523520318 podStartE2EDuration="1.523520318s" podCreationTimestamp="2025-07-10 
23:39:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:39:56.522862602 +0000 UTC m=+1.210654789" watchObservedRunningTime="2025-07-10 23:39:56.523520318 +0000 UTC m=+1.211312505" Jul 10 23:39:57.467891 kubelet[2622]: E0710 23:39:57.467853 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:57.469022 kubelet[2622]: E0710 23:39:57.468977 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:57.469096 kubelet[2622]: E0710 23:39:57.469029 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:57.914355 sudo[1703]: pam_unix(sudo:session): session closed for user root Jul 10 23:39:57.915980 sshd[1702]: Connection closed by 10.0.0.1 port 36540 Jul 10 23:39:57.916522 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jul 10 23:39:57.920370 systemd[1]: sshd@6-10.0.0.34:22-10.0.0.1:36540.service: Deactivated successfully. Jul 10 23:39:57.923207 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 23:39:57.923771 systemd[1]: session-7.scope: Consumed 6.842s CPU time, 261M memory peak. Jul 10 23:39:57.924806 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Jul 10 23:39:57.926484 systemd-logind[1478]: Removed session 7. 
Jul 10 23:39:58.469140 kubelet[2622]: E0710 23:39:58.469099 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:39:58.469498 kubelet[2622]: E0710 23:39:58.469193 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:00.818562 kubelet[2622]: I0710 23:40:00.818516 2622 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 23:40:00.818977 containerd[1500]: time="2025-07-10T23:40:00.818940204Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 23:40:00.819208 kubelet[2622]: I0710 23:40:00.819157 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 23:40:01.876271 systemd[1]: Created slice kubepods-burstable-pod8704ec03_be53_43d4_92b6_cb92389446c1.slice - libcontainer container kubepods-burstable-pod8704ec03_be53_43d4_92b6_cb92389446c1.slice. Jul 10 23:40:01.884665 systemd[1]: Created slice kubepods-besteffort-pod3c24302c_5d92_41f0_bcae_744b4677fc90.slice - libcontainer container kubepods-besteffort-pod3c24302c_5d92_41f0_bcae_744b4677fc90.slice. 
Jul 10 23:40:01.940159 kubelet[2622]: I0710 23:40:01.940109 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-config-path\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.940159 kubelet[2622]: I0710 23:40:01.940156 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-net\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.940159 kubelet[2622]: I0710 23:40:01.940172 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-hubble-tls\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.941862 kubelet[2622]: I0710 23:40:01.940188 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w86pc\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-kube-api-access-w86pc\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.941862 kubelet[2622]: I0710 23:40:01.940208 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c24302c-5d92-41f0-bcae-744b4677fc90-xtables-lock\") pod \"kube-proxy-59jfz\" (UID: \"3c24302c-5d92-41f0-bcae-744b4677fc90\") " pod="kube-system/kube-proxy-59jfz" Jul 10 23:40:01.941862 kubelet[2622]: I0710 23:40:01.940225 2622 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-cgroup\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.941862 kubelet[2622]: I0710 23:40:01.940239 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-kernel\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.941862 kubelet[2622]: I0710 23:40:01.940255 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c24302c-5d92-41f0-bcae-744b4677fc90-kube-proxy\") pod \"kube-proxy-59jfz\" (UID: \"3c24302c-5d92-41f0-bcae-744b4677fc90\") " pod="kube-system/kube-proxy-59jfz" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940270 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv2q9\" (UniqueName: \"kubernetes.io/projected/3c24302c-5d92-41f0-bcae-744b4677fc90-kube-api-access-tv2q9\") pod \"kube-proxy-59jfz\" (UID: \"3c24302c-5d92-41f0-bcae-744b4677fc90\") " pod="kube-system/kube-proxy-59jfz" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940284 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-lib-modules\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940297 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-xtables-lock\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940310 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8704ec03-be53-43d4-92b6-cb92389446c1-clustermesh-secrets\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940327 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-bpf-maps\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942007 kubelet[2622]: I0710 23:40:01.940345 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-hostproc\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942123 kubelet[2622]: I0710 23:40:01.940360 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cni-path\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942123 kubelet[2622]: I0710 23:40:01.940374 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c24302c-5d92-41f0-bcae-744b4677fc90-lib-modules\") pod \"kube-proxy-59jfz\" (UID: \"3c24302c-5d92-41f0-bcae-744b4677fc90\") " 
pod="kube-system/kube-proxy-59jfz" Jul 10 23:40:01.942123 kubelet[2622]: I0710 23:40:01.940389 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-run\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:01.942123 kubelet[2622]: I0710 23:40:01.940404 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-etc-cni-netd\") pod \"cilium-pzs2g\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " pod="kube-system/cilium-pzs2g" Jul 10 23:40:02.039664 systemd[1]: Created slice kubepods-besteffort-pod0a379e5e_2e05_4fb7_81b5_154c7b297b39.slice - libcontainer container kubepods-besteffort-pod0a379e5e_2e05_4fb7_81b5_154c7b297b39.slice. Jul 10 23:40:02.041202 kubelet[2622]: I0710 23:40:02.041155 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttkt\" (UniqueName: \"kubernetes.io/projected/0a379e5e-2e05-4fb7-81b5-154c7b297b39-kube-api-access-sttkt\") pod \"cilium-operator-6c4d7847fc-v29nx\" (UID: \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\") " pod="kube-system/cilium-operator-6c4d7847fc-v29nx" Jul 10 23:40:02.042408 kubelet[2622]: I0710 23:40:02.042372 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a379e5e-2e05-4fb7-81b5-154c7b297b39-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-v29nx\" (UID: \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\") " pod="kube-system/cilium-operator-6c4d7847fc-v29nx" Jul 10 23:40:02.182497 kubelet[2622]: E0710 23:40:02.182379 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.183755 containerd[1500]: time="2025-07-10T23:40:02.182992693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzs2g,Uid:8704ec03-be53-43d4-92b6-cb92389446c1,Namespace:kube-system,Attempt:0,}" Jul 10 23:40:02.200051 kubelet[2622]: E0710 23:40:02.200011 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.200742 containerd[1500]: time="2025-07-10T23:40:02.200507858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59jfz,Uid:3c24302c-5d92-41f0-bcae-744b4677fc90,Namespace:kube-system,Attempt:0,}" Jul 10 23:40:02.230754 containerd[1500]: time="2025-07-10T23:40:02.230686529Z" level=info msg="connecting to shim 72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:40:02.231446 containerd[1500]: time="2025-07-10T23:40:02.231413686Z" level=info msg="connecting to shim f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a" address="unix:///run/containerd/s/293205024f3b3137146da0028b0f5734eb9d1d0f99ee5f90306306fd77c58013" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:40:02.285090 systemd[1]: Started cri-containerd-72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216.scope - libcontainer container 72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216. Jul 10 23:40:02.289785 systemd[1]: Started cri-containerd-f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a.scope - libcontainer container f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a. 
Jul 10 23:40:02.322607 containerd[1500]: time="2025-07-10T23:40:02.322563896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-59jfz,Uid:3c24302c-5d92-41f0-bcae-744b4677fc90,Namespace:kube-system,Attempt:0,} returns sandbox id \"f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a\"" Jul 10 23:40:02.324196 containerd[1500]: time="2025-07-10T23:40:02.324133289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pzs2g,Uid:8704ec03-be53-43d4-92b6-cb92389446c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\"" Jul 10 23:40:02.328817 kubelet[2622]: E0710 23:40:02.328790 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.329832 kubelet[2622]: E0710 23:40:02.329809 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.341146 containerd[1500]: time="2025-07-10T23:40:02.341104297Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 23:40:02.346420 containerd[1500]: time="2025-07-10T23:40:02.346380754Z" level=info msg="CreateContainer within sandbox \"f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 23:40:02.349020 kubelet[2622]: E0710 23:40:02.348027 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.349142 containerd[1500]: time="2025-07-10T23:40:02.348440785Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v29nx,Uid:0a379e5e-2e05-4fb7-81b5-154c7b297b39,Namespace:kube-system,Attempt:0,}" Jul 10 23:40:02.359328 containerd[1500]: time="2025-07-10T23:40:02.359282179Z" level=info msg="Container 227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:02.383683 containerd[1500]: time="2025-07-10T23:40:02.383483915Z" level=info msg="CreateContainer within sandbox \"f16bf68fc04a52339851a4931c8558ed0c2d7549e51afa2f85afa3cb7b20ed5a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c\"" Jul 10 23:40:02.385821 containerd[1500]: time="2025-07-10T23:40:02.385753786Z" level=info msg="StartContainer for \"227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c\"" Jul 10 23:40:02.389648 containerd[1500]: time="2025-07-10T23:40:02.389600049Z" level=info msg="connecting to shim 227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c" address="unix:///run/containerd/s/293205024f3b3137146da0028b0f5734eb9d1d0f99ee5f90306306fd77c58013" protocol=ttrpc version=3 Jul 10 23:40:02.422732 containerd[1500]: time="2025-07-10T23:40:02.422651908Z" level=info msg="connecting to shim da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf" address="unix:///run/containerd/s/67deeb15e7dc5efa5ea15164e51e96558cde1fedffce0f827e7f040e314f24fb" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:40:02.428894 systemd[1]: Started cri-containerd-227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c.scope - libcontainer container 227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c. Jul 10 23:40:02.453901 systemd[1]: Started cri-containerd-da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf.scope - libcontainer container da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf. 
Jul 10 23:40:02.483068 containerd[1500]: time="2025-07-10T23:40:02.483015769Z" level=info msg="StartContainer for \"227dbace0827641054a962f3e72eb52bc7e91f637999ccc1e13f94fdff908a4c\" returns successfully" Jul 10 23:40:02.539356 containerd[1500]: time="2025-07-10T23:40:02.539310169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-v29nx,Uid:0a379e5e-2e05-4fb7-81b5-154c7b297b39,Namespace:kube-system,Attempt:0,} returns sandbox id \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\"" Jul 10 23:40:02.540293 kubelet[2622]: E0710 23:40:02.540267 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:02.718740 kubelet[2622]: E0710 23:40:02.718596 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:03.486378 kubelet[2622]: E0710 23:40:03.486337 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:03.487586 kubelet[2622]: E0710 23:40:03.487559 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:03.500968 kubelet[2622]: I0710 23:40:03.500887 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-59jfz" podStartSLOduration=2.500870867 podStartE2EDuration="2.500870867s" podCreationTimestamp="2025-07-10 23:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:40:03.500822068 +0000 UTC m=+8.188614255" watchObservedRunningTime="2025-07-10 
23:40:03.500870867 +0000 UTC m=+8.188663054" Jul 10 23:40:04.491543 kubelet[2622]: E0710 23:40:04.491426 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:04.493551 kubelet[2622]: E0710 23:40:04.493454 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:06.360202 kubelet[2622]: E0710 23:40:06.360104 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:07.487739 kubelet[2622]: E0710 23:40:07.487637 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:08.875893 update_engine[1481]: I20250710 23:40:08.875829 1481 update_attempter.cc:509] Updating boot flags... Jul 10 23:40:14.144004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2904050958.mount: Deactivated successfully. 
Jul 10 23:40:15.605409 containerd[1500]: time="2025-07-10T23:40:15.605351273Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:40:15.620351 containerd[1500]: time="2025-07-10T23:40:15.620301325Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 10 23:40:15.622167 containerd[1500]: time="2025-07-10T23:40:15.622106842Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:40:15.624217 containerd[1500]: time="2025-07-10T23:40:15.624171718Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.283023701s" Jul 10 23:40:15.624420 containerd[1500]: time="2025-07-10T23:40:15.624327397Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 23:40:15.628475 containerd[1500]: time="2025-07-10T23:40:15.628404350Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 23:40:15.643743 containerd[1500]: time="2025-07-10T23:40:15.643171603Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 23:40:15.649763 containerd[1500]: time="2025-07-10T23:40:15.649700911Z" level=info msg="Container af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:15.656209 containerd[1500]: time="2025-07-10T23:40:15.656162699Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\"" Jul 10 23:40:15.657484 containerd[1500]: time="2025-07-10T23:40:15.657456576Z" level=info msg="StartContainer for \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\"" Jul 10 23:40:15.658328 containerd[1500]: time="2025-07-10T23:40:15.658297735Z" level=info msg="connecting to shim af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" protocol=ttrpc version=3 Jul 10 23:40:15.681918 systemd[1]: Started cri-containerd-af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325.scope - libcontainer container af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325. Jul 10 23:40:15.718352 containerd[1500]: time="2025-07-10T23:40:15.718304184Z" level=info msg="StartContainer for \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" returns successfully" Jul 10 23:40:15.790029 systemd[1]: cri-containerd-af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325.scope: Deactivated successfully. 
Jul 10 23:40:15.814979 containerd[1500]: time="2025-07-10T23:40:15.814923245Z" level=info msg="received exit event container_id:\"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" id:\"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" pid:3066 exited_at:{seconds:1752190815 nanos:796734879}" Jul 10 23:40:15.816027 containerd[1500]: time="2025-07-10T23:40:15.815982723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" id:\"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" pid:3066 exited_at:{seconds:1752190815 nanos:796734879}" Jul 10 23:40:15.857988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325-rootfs.mount: Deactivated successfully. Jul 10 23:40:16.521824 kubelet[2622]: E0710 23:40:16.521772 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:16.527952 containerd[1500]: time="2025-07-10T23:40:16.527897907Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 23:40:16.553111 containerd[1500]: time="2025-07-10T23:40:16.553063384Z" level=info msg="Container 40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:16.566564 containerd[1500]: time="2025-07-10T23:40:16.566516800Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\"" Jul 10 23:40:16.567455 containerd[1500]: 
time="2025-07-10T23:40:16.567382439Z" level=info msg="StartContainer for \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\"" Jul 10 23:40:16.569412 containerd[1500]: time="2025-07-10T23:40:16.569136916Z" level=info msg="connecting to shim 40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" protocol=ttrpc version=3 Jul 10 23:40:16.595941 systemd[1]: Started cri-containerd-40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77.scope - libcontainer container 40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77. Jul 10 23:40:16.642897 containerd[1500]: time="2025-07-10T23:40:16.642782268Z" level=info msg="StartContainer for \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" returns successfully" Jul 10 23:40:16.667653 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 23:40:16.668162 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:40:16.668551 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:40:16.671645 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 23:40:16.673662 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 23:40:16.678228 systemd[1]: cri-containerd-40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77.scope: Deactivated successfully. 
Jul 10 23:40:16.683743 containerd[1500]: time="2025-07-10T23:40:16.683681317Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" id:\"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" pid:3115 exited_at:{seconds:1752190816 nanos:680862842}" Jul 10 23:40:16.684478 containerd[1500]: time="2025-07-10T23:40:16.684409196Z" level=info msg="received exit event container_id:\"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" id:\"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" pid:3115 exited_at:{seconds:1752190816 nanos:680862842}" Jul 10 23:40:16.711486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 23:40:16.715189 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77-rootfs.mount: Deactivated successfully. Jul 10 23:40:16.980171 containerd[1500]: time="2025-07-10T23:40:16.980099123Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:40:16.981468 containerd[1500]: time="2025-07-10T23:40:16.981434441Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 10 23:40:16.982436 containerd[1500]: time="2025-07-10T23:40:16.982404959Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 23:40:16.983629 containerd[1500]: time="2025-07-10T23:40:16.983589517Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.355001887s" Jul 10 23:40:16.983629 containerd[1500]: time="2025-07-10T23:40:16.983625477Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 23:40:16.993658 containerd[1500]: time="2025-07-10T23:40:16.993610700Z" level=info msg="CreateContainer within sandbox \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 23:40:17.001777 containerd[1500]: time="2025-07-10T23:40:17.001725446Z" level=info msg="Container 8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:17.009145 containerd[1500]: time="2025-07-10T23:40:17.009096434Z" level=info msg="CreateContainer within sandbox \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\"" Jul 10 23:40:17.009850 containerd[1500]: time="2025-07-10T23:40:17.009819513Z" level=info msg="StartContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\"" Jul 10 23:40:17.011761 containerd[1500]: time="2025-07-10T23:40:17.011725390Z" level=info msg="connecting to shim 8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749" address="unix:///run/containerd/s/67deeb15e7dc5efa5ea15164e51e96558cde1fedffce0f827e7f040e314f24fb" protocol=ttrpc version=3 Jul 10 23:40:17.032922 systemd[1]: Started 
cri-containerd-8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749.scope - libcontainer container 8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749. Jul 10 23:40:17.069924 containerd[1500]: time="2025-07-10T23:40:17.069881015Z" level=info msg="StartContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" returns successfully" Jul 10 23:40:17.528054 kubelet[2622]: E0710 23:40:17.528002 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:17.562217 containerd[1500]: time="2025-07-10T23:40:17.562174655Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 23:40:17.567199 kubelet[2622]: E0710 23:40:17.567156 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:17.580982 containerd[1500]: time="2025-07-10T23:40:17.580932624Z" level=info msg="Container deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:17.590539 kubelet[2622]: I0710 23:40:17.590476 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-v29nx" podStartSLOduration=2.147290213 podStartE2EDuration="16.590458289s" podCreationTimestamp="2025-07-10 23:40:01 +0000 UTC" firstStartedPulling="2025-07-10 23:40:02.54124972 +0000 UTC m=+7.229041867" lastFinishedPulling="2025-07-10 23:40:16.984417756 +0000 UTC m=+21.672209943" observedRunningTime="2025-07-10 23:40:17.587607094 +0000 UTC m=+22.275399281" watchObservedRunningTime="2025-07-10 23:40:17.590458289 +0000 UTC m=+22.278250476" Jul 10 23:40:17.594594 containerd[1500]: 
time="2025-07-10T23:40:17.594542282Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\"" Jul 10 23:40:17.596532 containerd[1500]: time="2025-07-10T23:40:17.596414679Z" level=info msg="StartContainer for \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\"" Jul 10 23:40:17.600744 containerd[1500]: time="2025-07-10T23:40:17.600107833Z" level=info msg="connecting to shim deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" protocol=ttrpc version=3 Jul 10 23:40:17.619882 systemd[1]: Started cri-containerd-deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0.scope - libcontainer container deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0. Jul 10 23:40:17.675582 containerd[1500]: time="2025-07-10T23:40:17.673980993Z" level=info msg="StartContainer for \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" returns successfully" Jul 10 23:40:17.690898 systemd[1]: cri-containerd-deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0.scope: Deactivated successfully. 
Jul 10 23:40:17.709146 containerd[1500]: time="2025-07-10T23:40:17.709105856Z" level=info msg="TaskExit event in podsandbox handler container_id:\"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" id:\"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" pid:3215 exited_at:{seconds:1752190817 nanos:708759337}" Jul 10 23:40:17.709279 containerd[1500]: time="2025-07-10T23:40:17.709183936Z" level=info msg="received exit event container_id:\"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" id:\"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" pid:3215 exited_at:{seconds:1752190817 nanos:708759337}" Jul 10 23:40:17.744213 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0-rootfs.mount: Deactivated successfully. Jul 10 23:40:18.538233 kubelet[2622]: E0710 23:40:18.538155 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:18.538985 kubelet[2622]: E0710 23:40:18.538860 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:18.547193 containerd[1500]: time="2025-07-10T23:40:18.547135629Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 23:40:18.564610 containerd[1500]: time="2025-07-10T23:40:18.564559523Z" level=info msg="Container 5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:18.585308 containerd[1500]: time="2025-07-10T23:40:18.585262251Z" level=info msg="CreateContainer within sandbox 
\"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\"" Jul 10 23:40:18.585950 containerd[1500]: time="2025-07-10T23:40:18.585889730Z" level=info msg="StartContainer for \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\"" Jul 10 23:40:18.586717 containerd[1500]: time="2025-07-10T23:40:18.586671129Z" level=info msg="connecting to shim 5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" protocol=ttrpc version=3 Jul 10 23:40:18.609874 systemd[1]: Started cri-containerd-5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0.scope - libcontainer container 5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0. Jul 10 23:40:18.642513 systemd[1]: cri-containerd-5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0.scope: Deactivated successfully. 
Jul 10 23:40:18.644692 containerd[1500]: time="2025-07-10T23:40:18.643833562Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" id:\"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" pid:3254 exited_at:{seconds:1752190818 nanos:642944643}" Jul 10 23:40:18.648319 containerd[1500]: time="2025-07-10T23:40:18.648290835Z" level=info msg="received exit event container_id:\"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" id:\"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" pid:3254 exited_at:{seconds:1752190818 nanos:642944643}" Jul 10 23:40:18.649303 containerd[1500]: time="2025-07-10T23:40:18.649267394Z" level=info msg="StartContainer for \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" returns successfully" Jul 10 23:40:18.670661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0-rootfs.mount: Deactivated successfully. 
Jul 10 23:40:19.544762 kubelet[2622]: E0710 23:40:19.544290 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:19.550571 containerd[1500]: time="2025-07-10T23:40:19.549654594Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 23:40:19.608745 containerd[1500]: time="2025-07-10T23:40:19.608589070Z" level=info msg="Container 44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:19.619176 containerd[1500]: time="2025-07-10T23:40:19.619005935Z" level=info msg="CreateContainer within sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\"" Jul 10 23:40:19.619897 containerd[1500]: time="2025-07-10T23:40:19.619867294Z" level=info msg="StartContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\"" Jul 10 23:40:19.621002 containerd[1500]: time="2025-07-10T23:40:19.620972932Z" level=info msg="connecting to shim 44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf" address="unix:///run/containerd/s/b0fc99770b6afa342db6e13c65848e747a8cbcb5a0bcffe15e5c3a3f941867b5" protocol=ttrpc version=3 Jul 10 23:40:19.641904 systemd[1]: Started cri-containerd-44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf.scope - libcontainer container 44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf. 
Jul 10 23:40:19.675835 containerd[1500]: time="2025-07-10T23:40:19.675395214Z" level=info msg="StartContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" returns successfully" Jul 10 23:40:19.800398 containerd[1500]: time="2025-07-10T23:40:19.800290756Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" id:\"1384ed2147b6f8c91c49984b7dfdc042c8569bfd48d69cf539dc92d44924df43\" pid:3323 exited_at:{seconds:1752190819 nanos:800000316}" Jul 10 23:40:19.889087 kubelet[2622]: I0710 23:40:19.889057 2622 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 23:40:19.989924 kubelet[2622]: I0710 23:40:19.988419 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57c66c87-8aa5-4cf3-bf21-24cb42d3bea9-config-volume\") pod \"coredns-674b8bbfcf-x266v\" (UID: \"57c66c87-8aa5-4cf3-bf21-24cb42d3bea9\") " pod="kube-system/coredns-674b8bbfcf-x266v" Jul 10 23:40:19.989924 kubelet[2622]: I0710 23:40:19.988460 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/070a0e66-6948-4229-8798-1d702427907f-config-volume\") pod \"coredns-674b8bbfcf-m8wpg\" (UID: \"070a0e66-6948-4229-8798-1d702427907f\") " pod="kube-system/coredns-674b8bbfcf-m8wpg" Jul 10 23:40:19.990203 kubelet[2622]: I0710 23:40:19.988479 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjt24\" (UniqueName: \"kubernetes.io/projected/070a0e66-6948-4229-8798-1d702427907f-kube-api-access-tjt24\") pod \"coredns-674b8bbfcf-m8wpg\" (UID: \"070a0e66-6948-4229-8798-1d702427907f\") " pod="kube-system/coredns-674b8bbfcf-m8wpg" Jul 10 23:40:19.990203 kubelet[2622]: I0710 23:40:19.990147 2622 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bs7\" (UniqueName: \"kubernetes.io/projected/57c66c87-8aa5-4cf3-bf21-24cb42d3bea9-kube-api-access-z8bs7\") pod \"coredns-674b8bbfcf-x266v\" (UID: \"57c66c87-8aa5-4cf3-bf21-24cb42d3bea9\") " pod="kube-system/coredns-674b8bbfcf-x266v" Jul 10 23:40:19.997434 systemd[1]: Created slice kubepods-burstable-pod57c66c87_8aa5_4cf3_bf21_24cb42d3bea9.slice - libcontainer container kubepods-burstable-pod57c66c87_8aa5_4cf3_bf21_24cb42d3bea9.slice. Jul 10 23:40:20.002927 systemd[1]: Created slice kubepods-burstable-pod070a0e66_6948_4229_8798_1d702427907f.slice - libcontainer container kubepods-burstable-pod070a0e66_6948_4229_8798_1d702427907f.slice. Jul 10 23:40:20.308851 kubelet[2622]: E0710 23:40:20.308806 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:20.309096 kubelet[2622]: E0710 23:40:20.309068 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:20.309855 containerd[1500]: time="2025-07-10T23:40:20.309818135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x266v,Uid:57c66c87-8aa5-4cf3-bf21-24cb42d3bea9,Namespace:kube-system,Attempt:0,}" Jul 10 23:40:20.311845 containerd[1500]: time="2025-07-10T23:40:20.311809693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8wpg,Uid:070a0e66-6948-4229-8798-1d702427907f,Namespace:kube-system,Attempt:0,}" Jul 10 23:40:20.550764 kubelet[2622]: E0710 23:40:20.550730 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:21.552704 kubelet[2622]: E0710 23:40:21.552663 2622 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:22.217645 systemd-networkd[1429]: cilium_host: Link UP Jul 10 23:40:22.218233 systemd-networkd[1429]: cilium_net: Link UP Jul 10 23:40:22.218378 systemd-networkd[1429]: cilium_host: Gained carrier Jul 10 23:40:22.219000 systemd-networkd[1429]: cilium_net: Gained carrier Jul 10 23:40:22.315057 systemd-networkd[1429]: cilium_vxlan: Link UP Jul 10 23:40:22.315064 systemd-networkd[1429]: cilium_vxlan: Gained carrier Jul 10 23:40:22.561941 kubelet[2622]: E0710 23:40:22.561909 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:22.740753 kernel: NET: Registered PF_ALG protocol family Jul 10 23:40:22.858901 systemd-networkd[1429]: cilium_net: Gained IPv6LL Jul 10 23:40:22.986904 systemd-networkd[1429]: cilium_host: Gained IPv6LL Jul 10 23:40:23.386595 systemd-networkd[1429]: lxc_health: Link UP Jul 10 23:40:23.387579 systemd-networkd[1429]: lxc_health: Gained carrier Jul 10 23:40:23.515738 kernel: eth0: renamed from tmp4749d Jul 10 23:40:23.515756 systemd-networkd[1429]: lxcf67af0dc6d1e: Link UP Jul 10 23:40:23.530761 kernel: eth0: renamed from tmpd16a4 Jul 10 23:40:23.531053 systemd-networkd[1429]: lxcf67af0dc6d1e: Gained carrier Jul 10 23:40:23.531927 systemd-networkd[1429]: lxc4165d66e0a05: Link UP Jul 10 23:40:23.535328 systemd-networkd[1429]: lxc4165d66e0a05: Gained carrier Jul 10 23:40:24.138935 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL Jul 10 23:40:24.200055 kubelet[2622]: E0710 23:40:24.199881 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:24.272052 kubelet[2622]: I0710 23:40:24.271972 2622 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pzs2g" podStartSLOduration=9.98468478 podStartE2EDuration="23.271945513s" podCreationTimestamp="2025-07-10 23:40:01 +0000 UTC" firstStartedPulling="2025-07-10 23:40:02.340744018 +0000 UTC m=+7.028536205" lastFinishedPulling="2025-07-10 23:40:15.628004751 +0000 UTC m=+20.315796938" observedRunningTime="2025-07-10 23:40:20.570800386 +0000 UTC m=+25.258592653" watchObservedRunningTime="2025-07-10 23:40:24.271945513 +0000 UTC m=+28.959737700" Jul 10 23:40:24.531257 systemd[1]: Started sshd@7-10.0.0.34:22-10.0.0.1:38168.service - OpenSSH per-connection server daemon (10.0.0.1:38168). Jul 10 23:40:24.568994 kubelet[2622]: E0710 23:40:24.568947 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:24.595885 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 38168 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:24.598096 sshd-session[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:24.603370 systemd-logind[1478]: New session 8 of user core. Jul 10 23:40:24.611996 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 23:40:24.767376 sshd[3808]: Connection closed by 10.0.0.1 port 38168 Jul 10 23:40:24.768283 sshd-session[3806]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:24.772056 systemd[1]: sshd@7-10.0.0.34:22-10.0.0.1:38168.service: Deactivated successfully. Jul 10 23:40:24.774271 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 23:40:24.776863 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit. Jul 10 23:40:24.778072 systemd-logind[1478]: Removed session 8. 
Jul 10 23:40:24.778847 systemd-networkd[1429]: lxc4165d66e0a05: Gained IPv6LL Jul 10 23:40:25.291881 systemd-networkd[1429]: lxc_health: Gained IPv6LL Jul 10 23:40:25.292268 systemd-networkd[1429]: lxcf67af0dc6d1e: Gained IPv6LL Jul 10 23:40:27.578232 containerd[1500]: time="2025-07-10T23:40:27.578177107Z" level=info msg="connecting to shim d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af" address="unix:///run/containerd/s/5515eb561d6f6de09679e8001eaf76961a1bb751124638bf5d0a4b2d6db6e1a6" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:40:27.590078 containerd[1500]: time="2025-07-10T23:40:27.590026897Z" level=info msg="connecting to shim 4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6" address="unix:///run/containerd/s/4491808c3d7cae9683527ab1d7ea0bdc3e29527c75e83853d5bec9b83a4ba410" namespace=k8s.io protocol=ttrpc version=3 Jul 10 23:40:27.605969 systemd[1]: Started cri-containerd-d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af.scope - libcontainer container d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af. Jul 10 23:40:27.609749 systemd[1]: Started cri-containerd-4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6.scope - libcontainer container 4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6. 
Jul 10 23:40:27.621214 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:40:27.626029 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 23:40:27.646878 containerd[1500]: time="2025-07-10T23:40:27.646824009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m8wpg,Uid:070a0e66-6948-4229-8798-1d702427907f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af\"" Jul 10 23:40:27.648514 kubelet[2622]: E0710 23:40:27.648364 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:27.658726 containerd[1500]: time="2025-07-10T23:40:27.658651559Z" level=info msg="CreateContainer within sandbox \"d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:40:27.659597 containerd[1500]: time="2025-07-10T23:40:27.659543758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x266v,Uid:57c66c87-8aa5-4cf3-bf21-24cb42d3bea9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6\"" Jul 10 23:40:27.660748 kubelet[2622]: E0710 23:40:27.660258 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:27.666235 containerd[1500]: time="2025-07-10T23:40:27.666191152Z" level=info msg="CreateContainer within sandbox \"4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 23:40:27.673669 containerd[1500]: 
time="2025-07-10T23:40:27.673625906Z" level=info msg="Container 9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:27.684618 containerd[1500]: time="2025-07-10T23:40:27.684480777Z" level=info msg="CreateContainer within sandbox \"d16a4521773a3ca8c216422eb31cc4b7947996a91bddb306ed50c43356fb72af\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a\"" Jul 10 23:40:27.685700 containerd[1500]: time="2025-07-10T23:40:27.685519816Z" level=info msg="Container b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6: CDI devices from CRI Config.CDIDevices: []" Jul 10 23:40:27.686653 containerd[1500]: time="2025-07-10T23:40:27.685987416Z" level=info msg="StartContainer for \"9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a\"" Jul 10 23:40:27.687139 containerd[1500]: time="2025-07-10T23:40:27.687104415Z" level=info msg="connecting to shim 9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a" address="unix:///run/containerd/s/5515eb561d6f6de09679e8001eaf76961a1bb751124638bf5d0a4b2d6db6e1a6" protocol=ttrpc version=3 Jul 10 23:40:27.694674 containerd[1500]: time="2025-07-10T23:40:27.694480408Z" level=info msg="CreateContainer within sandbox \"4749d380fd5e229385899ad660f203222d909575640072be425e534372c618b6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6\"" Jul 10 23:40:27.695230 containerd[1500]: time="2025-07-10T23:40:27.695203768Z" level=info msg="StartContainer for \"b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6\"" Jul 10 23:40:27.696263 containerd[1500]: time="2025-07-10T23:40:27.696232167Z" level=info msg="connecting to shim b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6" 
address="unix:///run/containerd/s/4491808c3d7cae9683527ab1d7ea0bdc3e29527c75e83853d5bec9b83a4ba410" protocol=ttrpc version=3 Jul 10 23:40:27.723096 systemd[1]: Started cri-containerd-9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a.scope - libcontainer container 9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a. Jul 10 23:40:27.733931 systemd[1]: Started cri-containerd-b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6.scope - libcontainer container b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6. Jul 10 23:40:27.769358 containerd[1500]: time="2025-07-10T23:40:27.769216825Z" level=info msg="StartContainer for \"9470f472c657d68677913c7ac8d7fb01321a7a504e9510ae29d9da6f0af6dc2a\" returns successfully" Jul 10 23:40:27.781989 containerd[1500]: time="2025-07-10T23:40:27.781852574Z" level=info msg="StartContainer for \"b9f901bba4dbde285f40b2a670bae96534991aed3ced1082b3aca6ba48dc71c6\" returns successfully" Jul 10 23:40:28.563309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460252810.mount: Deactivated successfully. 
Jul 10 23:40:28.584047 kubelet[2622]: E0710 23:40:28.583957 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:28.586963 kubelet[2622]: E0710 23:40:28.586839 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:28.614930 kubelet[2622]: I0710 23:40:28.614837 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x266v" podStartSLOduration=27.614788776 podStartE2EDuration="27.614788776s" podCreationTimestamp="2025-07-10 23:40:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:40:28.611842579 +0000 UTC m=+33.299634766" watchObservedRunningTime="2025-07-10 23:40:28.614788776 +0000 UTC m=+33.302581003" Jul 10 23:40:28.661662 kubelet[2622]: I0710 23:40:28.660196 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m8wpg" podStartSLOduration=26.6601789 podStartE2EDuration="26.6601789s" podCreationTimestamp="2025-07-10 23:40:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:40:28.659660141 +0000 UTC m=+33.347452368" watchObservedRunningTime="2025-07-10 23:40:28.6601789 +0000 UTC m=+33.347971047" Jul 10 23:40:29.588261 kubelet[2622]: E0710 23:40:29.588213 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:29.588450 kubelet[2622]: E0710 23:40:29.588412 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:29.783737 systemd[1]: Started sshd@8-10.0.0.34:22-10.0.0.1:38170.service - OpenSSH per-connection server daemon (10.0.0.1:38170). Jul 10 23:40:29.876305 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 38170 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:29.877990 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:29.883112 systemd-logind[1478]: New session 9 of user core. Jul 10 23:40:29.895958 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 23:40:30.015066 sshd[4003]: Connection closed by 10.0.0.1 port 38170 Jul 10 23:40:30.015857 sshd-session[4001]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:30.019427 systemd[1]: sshd@8-10.0.0.34:22-10.0.0.1:38170.service: Deactivated successfully. Jul 10 23:40:30.024162 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 23:40:30.025317 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit. Jul 10 23:40:30.027117 systemd-logind[1478]: Removed session 9. Jul 10 23:40:30.590430 kubelet[2622]: E0710 23:40:30.590399 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:30.591140 kubelet[2622]: E0710 23:40:30.591117 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 23:40:35.034501 systemd[1]: Started sshd@9-10.0.0.34:22-10.0.0.1:57828.service - OpenSSH per-connection server daemon (10.0.0.1:57828). 
Jul 10 23:40:35.080902 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 57828 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:35.082285 sshd-session[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:35.086880 systemd-logind[1478]: New session 10 of user core. Jul 10 23:40:35.105955 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 23:40:35.247574 sshd[4023]: Connection closed by 10.0.0.1 port 57828 Jul 10 23:40:35.248137 sshd-session[4021]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:35.252096 systemd[1]: sshd@9-10.0.0.34:22-10.0.0.1:57828.service: Deactivated successfully. Jul 10 23:40:35.254254 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 23:40:35.255341 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit. Jul 10 23:40:35.257170 systemd-logind[1478]: Removed session 10. Jul 10 23:40:40.260945 systemd[1]: Started sshd@10-10.0.0.34:22-10.0.0.1:57842.service - OpenSSH per-connection server daemon (10.0.0.1:57842). Jul 10 23:40:40.344794 sshd[4037]: Accepted publickey for core from 10.0.0.1 port 57842 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:40.346361 sshd-session[4037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:40.352123 systemd-logind[1478]: New session 11 of user core. Jul 10 23:40:40.365990 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 23:40:40.511477 sshd[4039]: Connection closed by 10.0.0.1 port 57842 Jul 10 23:40:40.512093 sshd-session[4037]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:40.528196 systemd[1]: sshd@10-10.0.0.34:22-10.0.0.1:57842.service: Deactivated successfully. Jul 10 23:40:40.533219 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 23:40:40.536318 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit. 
Jul 10 23:40:40.542694 systemd[1]: Started sshd@11-10.0.0.34:22-10.0.0.1:57846.service - OpenSSH per-connection server daemon (10.0.0.1:57846). Jul 10 23:40:40.544844 systemd-logind[1478]: Removed session 11. Jul 10 23:40:40.600272 sshd[4054]: Accepted publickey for core from 10.0.0.1 port 57846 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:40.601787 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:40.606394 systemd-logind[1478]: New session 12 of user core. Jul 10 23:40:40.612952 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 23:40:40.782794 sshd[4056]: Connection closed by 10.0.0.1 port 57846 Jul 10 23:40:40.783542 sshd-session[4054]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:40.801570 systemd[1]: sshd@11-10.0.0.34:22-10.0.0.1:57846.service: Deactivated successfully. Jul 10 23:40:40.803644 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 23:40:40.804882 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit. Jul 10 23:40:40.809158 systemd[1]: Started sshd@12-10.0.0.34:22-10.0.0.1:57862.service - OpenSSH per-connection server daemon (10.0.0.1:57862). Jul 10 23:40:40.813179 systemd-logind[1478]: Removed session 12. Jul 10 23:40:40.868734 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 57862 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:40.870181 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:40.874861 systemd-logind[1478]: New session 13 of user core. Jul 10 23:40:40.885936 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 10 23:40:41.008833 sshd[4070]: Connection closed by 10.0.0.1 port 57862 Jul 10 23:40:41.009406 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:41.013310 systemd[1]: sshd@12-10.0.0.34:22-10.0.0.1:57862.service: Deactivated successfully. Jul 10 23:40:41.017119 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 23:40:41.017995 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit. Jul 10 23:40:41.019101 systemd-logind[1478]: Removed session 13. Jul 10 23:40:46.027385 systemd[1]: Started sshd@13-10.0.0.34:22-10.0.0.1:36604.service - OpenSSH per-connection server daemon (10.0.0.1:36604). Jul 10 23:40:46.089809 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 36604 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:46.090925 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:46.095838 systemd-logind[1478]: New session 14 of user core. Jul 10 23:40:46.109937 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 23:40:46.247192 sshd[4086]: Connection closed by 10.0.0.1 port 36604 Jul 10 23:40:46.247220 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:46.250974 systemd[1]: sshd@13-10.0.0.34:22-10.0.0.1:36604.service: Deactivated successfully. Jul 10 23:40:46.256108 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 23:40:46.258523 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit. Jul 10 23:40:46.260240 systemd-logind[1478]: Removed session 14. Jul 10 23:40:51.262640 systemd[1]: Started sshd@14-10.0.0.34:22-10.0.0.1:36608.service - OpenSSH per-connection server daemon (10.0.0.1:36608). 
Jul 10 23:40:51.332436 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 36608 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:51.333881 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:51.338084 systemd-logind[1478]: New session 15 of user core. Jul 10 23:40:51.352911 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 23:40:51.485852 sshd[4103]: Connection closed by 10.0.0.1 port 36608 Jul 10 23:40:51.486442 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:51.497554 systemd[1]: sshd@14-10.0.0.34:22-10.0.0.1:36608.service: Deactivated successfully. Jul 10 23:40:51.501579 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 23:40:51.502624 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit. Jul 10 23:40:51.505552 systemd[1]: Started sshd@15-10.0.0.34:22-10.0.0.1:36616.service - OpenSSH per-connection server daemon (10.0.0.1:36616). Jul 10 23:40:51.506279 systemd-logind[1478]: Removed session 15. Jul 10 23:40:51.563616 sshd[4117]: Accepted publickey for core from 10.0.0.1 port 36616 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:51.565165 sshd-session[4117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:51.572500 systemd-logind[1478]: New session 16 of user core. Jul 10 23:40:51.585957 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 23:40:51.846233 sshd[4119]: Connection closed by 10.0.0.1 port 36616 Jul 10 23:40:51.847152 sshd-session[4117]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:51.859344 systemd[1]: sshd@15-10.0.0.34:22-10.0.0.1:36616.service: Deactivated successfully. Jul 10 23:40:51.862190 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 23:40:51.862991 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit. 
Jul 10 23:40:51.866407 systemd[1]: Started sshd@16-10.0.0.34:22-10.0.0.1:36632.service - OpenSSH per-connection server daemon (10.0.0.1:36632). Jul 10 23:40:51.867083 systemd-logind[1478]: Removed session 16. Jul 10 23:40:51.940350 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 36632 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:51.941923 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:51.947662 systemd-logind[1478]: New session 17 of user core. Jul 10 23:40:51.961003 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 23:40:52.631198 sshd[4134]: Connection closed by 10.0.0.1 port 36632 Jul 10 23:40:52.631783 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:52.645669 systemd[1]: sshd@16-10.0.0.34:22-10.0.0.1:36632.service: Deactivated successfully. Jul 10 23:40:52.650321 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 23:40:52.652817 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit. Jul 10 23:40:52.657786 systemd[1]: Started sshd@17-10.0.0.34:22-10.0.0.1:57640.service - OpenSSH per-connection server daemon (10.0.0.1:57640). Jul 10 23:40:52.659914 systemd-logind[1478]: Removed session 17. Jul 10 23:40:52.719166 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 57640 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:52.720868 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:52.726449 systemd-logind[1478]: New session 18 of user core. Jul 10 23:40:52.736930 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 10 23:40:52.996947 sshd[4159]: Connection closed by 10.0.0.1 port 57640 Jul 10 23:40:52.999984 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:53.010659 systemd[1]: sshd@17-10.0.0.34:22-10.0.0.1:57640.service: Deactivated successfully. Jul 10 23:40:53.013525 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 23:40:53.017857 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit. Jul 10 23:40:53.021000 systemd[1]: Started sshd@18-10.0.0.34:22-10.0.0.1:57654.service - OpenSSH per-connection server daemon (10.0.0.1:57654). Jul 10 23:40:53.025130 systemd-logind[1478]: Removed session 18. Jul 10 23:40:53.080092 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 57654 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:53.081838 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:53.088158 systemd-logind[1478]: New session 19 of user core. Jul 10 23:40:53.096891 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 23:40:53.222851 sshd[4173]: Connection closed by 10.0.0.1 port 57654 Jul 10 23:40:53.223547 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:53.227437 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Jul 10 23:40:53.227667 systemd[1]: sshd@18-10.0.0.34:22-10.0.0.1:57654.service: Deactivated successfully. Jul 10 23:40:53.229436 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 23:40:53.230936 systemd-logind[1478]: Removed session 19. Jul 10 23:40:58.235552 systemd[1]: Started sshd@19-10.0.0.34:22-10.0.0.1:57662.service - OpenSSH per-connection server daemon (10.0.0.1:57662). 
Jul 10 23:40:58.289929 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 57662 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:40:58.291951 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:40:58.299541 systemd-logind[1478]: New session 20 of user core. Jul 10 23:40:58.312992 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 23:40:58.432044 sshd[4195]: Connection closed by 10.0.0.1 port 57662 Jul 10 23:40:58.432595 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jul 10 23:40:58.437494 systemd[1]: sshd@19-10.0.0.34:22-10.0.0.1:57662.service: Deactivated successfully. Jul 10 23:40:58.439197 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 23:40:58.440772 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. Jul 10 23:40:58.442578 systemd-logind[1478]: Removed session 20. Jul 10 23:41:03.445089 systemd[1]: Started sshd@20-10.0.0.34:22-10.0.0.1:32820.service - OpenSSH per-connection server daemon (10.0.0.1:32820). Jul 10 23:41:03.518874 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 32820 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:41:03.520232 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:41:03.526949 systemd-logind[1478]: New session 21 of user core. Jul 10 23:41:03.543987 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 23:41:03.678780 sshd[4212]: Connection closed by 10.0.0.1 port 32820 Jul 10 23:41:03.677123 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jul 10 23:41:03.689320 systemd[1]: sshd@20-10.0.0.34:22-10.0.0.1:32820.service: Deactivated successfully. Jul 10 23:41:03.692370 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 23:41:03.696260 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. 
Jul 10 23:41:03.700868 systemd[1]: Started sshd@21-10.0.0.34:22-10.0.0.1:32822.service - OpenSSH per-connection server daemon (10.0.0.1:32822). Jul 10 23:41:03.702248 systemd-logind[1478]: Removed session 21. Jul 10 23:41:03.765774 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 32822 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:41:03.767307 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:41:03.772404 systemd-logind[1478]: New session 22 of user core. Jul 10 23:41:03.780929 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 23:41:05.638142 containerd[1500]: time="2025-07-10T23:41:05.637679830Z" level=info msg="StopContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" with timeout 30 (s)" Jul 10 23:41:05.640525 containerd[1500]: time="2025-07-10T23:41:05.640342767Z" level=info msg="Stop container \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" with signal terminated" Jul 10 23:41:05.655121 systemd[1]: cri-containerd-8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749.scope: Deactivated successfully. 
Jul 10 23:41:05.657815 containerd[1500]: time="2025-07-10T23:41:05.657780996Z" level=info msg="received exit event container_id:\"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" id:\"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" pid:3178 exited_at:{seconds:1752190865 nanos:657461194}" Jul 10 23:41:05.658182 containerd[1500]: time="2025-07-10T23:41:05.658152238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" id:\"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" pid:3178 exited_at:{seconds:1752190865 nanos:657461194}" Jul 10 23:41:05.661764 containerd[1500]: time="2025-07-10T23:41:05.661303338Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 23:41:05.666633 containerd[1500]: time="2025-07-10T23:41:05.666596051Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" id:\"84c94c5a07afcdf31960cf4fd901a212845d0d33e8776c5bea223c7f9aef34b5\" pid:4255 exited_at:{seconds:1752190865 nanos:666053968}" Jul 10 23:41:05.669031 containerd[1500]: time="2025-07-10T23:41:05.668992546Z" level=info msg="StopContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" with timeout 2 (s)" Jul 10 23:41:05.669612 containerd[1500]: time="2025-07-10T23:41:05.669394749Z" level=info msg="Stop container \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" with signal terminated" Jul 10 23:41:05.677253 systemd-networkd[1429]: lxc_health: Link DOWN Jul 10 23:41:05.677483 systemd-networkd[1429]: lxc_health: Lost carrier Jul 10 23:41:05.683656 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749-rootfs.mount: Deactivated successfully. Jul 10 23:41:05.698803 containerd[1500]: time="2025-07-10T23:41:05.698765972Z" level=info msg="StopContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" returns successfully" Jul 10 23:41:05.699736 systemd[1]: cri-containerd-44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf.scope: Deactivated successfully. Jul 10 23:41:05.700042 systemd[1]: cri-containerd-44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf.scope: Consumed 7.422s CPU time, 127.7M memory peak, 144K read from disk, 12.9M written to disk. Jul 10 23:41:05.700359 containerd[1500]: time="2025-07-10T23:41:05.700313822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" pid:3293 exited_at:{seconds:1752190865 nanos:699954260}" Jul 10 23:41:05.700532 containerd[1500]: time="2025-07-10T23:41:05.700510303Z" level=info msg="received exit event container_id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" id:\"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" pid:3293 exited_at:{seconds:1752190865 nanos:699954260}" Jul 10 23:41:05.700998 containerd[1500]: time="2025-07-10T23:41:05.700974346Z" level=info msg="StopPodSandbox for \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\"" Jul 10 23:41:05.710880 containerd[1500]: time="2025-07-10T23:41:05.710831048Z" level=info msg="Container to stop \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.718637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf-rootfs.mount: 
Deactivated successfully. Jul 10 23:41:05.725909 systemd[1]: cri-containerd-da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf.scope: Deactivated successfully. Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.728999082Z" level=info msg="StopContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" returns successfully" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729227483Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" id:\"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" pid:2848 exit_status:137 exited_at:{seconds:1752190865 nanos:727025469}" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729412404Z" level=info msg="StopPodSandbox for \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\"" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729465765Z" level=info msg="Container to stop \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729476925Z" level=info msg="Container to stop \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729485085Z" level=info msg="Container to stop \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.729807 containerd[1500]: time="2025-07-10T23:41:05.729493885Z" level=info msg="Container to stop \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.729807 containerd[1500]: 
time="2025-07-10T23:41:05.729503165Z" level=info msg="Container to stop \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 23:41:05.734906 systemd[1]: cri-containerd-72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216.scope: Deactivated successfully. Jul 10 23:41:05.756852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf-rootfs.mount: Deactivated successfully. Jul 10 23:41:05.762380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216-rootfs.mount: Deactivated successfully. Jul 10 23:41:05.763167 containerd[1500]: time="2025-07-10T23:41:05.763077095Z" level=info msg="shim disconnected" id=da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf namespace=k8s.io Jul 10 23:41:05.783521 containerd[1500]: time="2025-07-10T23:41:05.763113815Z" level=warning msg="cleaning up after shim disconnected" id=da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf namespace=k8s.io Jul 10 23:41:05.783521 containerd[1500]: time="2025-07-10T23:41:05.783514423Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:41:05.783670 containerd[1500]: time="2025-07-10T23:41:05.764541744Z" level=info msg="shim disconnected" id=72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216 namespace=k8s.io Jul 10 23:41:05.783696 containerd[1500]: time="2025-07-10T23:41:05.783639864Z" level=warning msg="cleaning up after shim disconnected" id=72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216 namespace=k8s.io Jul 10 23:41:05.783696 containerd[1500]: time="2025-07-10T23:41:05.783684984Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 23:41:05.804403 containerd[1500]: time="2025-07-10T23:41:05.804063232Z" level=info msg="received exit event 
sandbox_id:\"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" exit_status:137 exited_at:{seconds:1752190865 nanos:736416288}" Jul 10 23:41:05.804403 containerd[1500]: time="2025-07-10T23:41:05.804122352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" id:\"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" pid:2772 exit_status:137 exited_at:{seconds:1752190865 nanos:736416288}" Jul 10 23:41:05.804403 containerd[1500]: time="2025-07-10T23:41:05.804214793Z" level=info msg="TearDown network for sandbox \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" successfully" Jul 10 23:41:05.804403 containerd[1500]: time="2025-07-10T23:41:05.804235873Z" level=info msg="StopPodSandbox for \"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" returns successfully" Jul 10 23:41:05.804403 containerd[1500]: time="2025-07-10T23:41:05.804300593Z" level=info msg="received exit event sandbox_id:\"da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf\" exit_status:137 exited_at:{seconds:1752190865 nanos:727025469}" Jul 10 23:41:05.804601 containerd[1500]: time="2025-07-10T23:41:05.804426994Z" level=info msg="TearDown network for sandbox \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" successfully" Jul 10 23:41:05.804601 containerd[1500]: time="2025-07-10T23:41:05.804440714Z" level=info msg="StopPodSandbox for \"72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216\" returns successfully" Jul 10 23:41:05.806251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da3355e3b0cbed0934b37d19f5251f3c1a5b7b0c9e6724fb0ac6f2647fa2e6cf-shm.mount: Deactivated successfully. Jul 10 23:41:05.806370 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72777b2be9f0397b799e8b9d6a24ead9cc2c1f76b9bf31d17cca8862f6d75216-shm.mount: Deactivated successfully. 
Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.910935 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8704ec03-be53-43d4-92b6-cb92389446c1-clustermesh-secrets\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.910987 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-xtables-lock\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.911005 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-bpf-maps\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.911022 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sttkt\" (UniqueName: \"kubernetes.io/projected/0a379e5e-2e05-4fb7-81b5-154c7b297b39-kube-api-access-sttkt\") pod \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\" (UID: \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\") " Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.911039 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-hostproc\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.911184 kubelet[2622]: I0710 23:41:05.911056 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-config-path\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 23:41:05.911072 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w86pc\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-kube-api-access-w86pc\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 23:41:05.911087 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-cgroup\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 23:41:05.911101 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-kernel\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 23:41:05.911121 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-hubble-tls\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 23:41:05.911135 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-etc-cni-netd\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912606 kubelet[2622]: I0710 
23:41:05.911151 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a379e5e-2e05-4fb7-81b5-154c7b297b39-cilium-config-path\") pod \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\" (UID: \"0a379e5e-2e05-4fb7-81b5-154c7b297b39\") " Jul 10 23:41:05.912820 kubelet[2622]: I0710 23:41:05.911168 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-lib-modules\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912820 kubelet[2622]: I0710 23:41:05.911181 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cni-path\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912820 kubelet[2622]: I0710 23:41:05.911196 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-net\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.912820 kubelet[2622]: I0710 23:41:05.911211 2622 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-run\") pod \"8704ec03-be53-43d4-92b6-cb92389446c1\" (UID: \"8704ec03-be53-43d4-92b6-cb92389446c1\") " Jul 10 23:41:05.913028 kubelet[2622]: I0710 23:41:05.913002 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: 
"8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.913068 kubelet[2622]: I0710 23:41:05.912999 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.913439 kubelet[2622]: I0710 23:41:05.913308 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.921890 kubelet[2622]: I0710 23:41:05.921831 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.921989 kubelet[2622]: I0710 23:41:05.921918 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-hostproc" (OuterVolumeSpecName: "hostproc") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.922658 kubelet[2622]: I0710 23:41:05.922594 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.924718 kubelet[2622]: I0710 23:41:05.922960 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.924718 kubelet[2622]: I0710 23:41:05.923042 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.924718 kubelet[2622]: I0710 23:41:05.923096 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.924718 kubelet[2622]: I0710 23:41:05.923128 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cni-path" (OuterVolumeSpecName: "cni-path") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 23:41:05.925766 kubelet[2622]: I0710 23:41:05.924904 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a379e5e-2e05-4fb7-81b5-154c7b297b39-kube-api-access-sttkt" (OuterVolumeSpecName: "kube-api-access-sttkt") pod "0a379e5e-2e05-4fb7-81b5-154c7b297b39" (UID: "0a379e5e-2e05-4fb7-81b5-154c7b297b39"). InnerVolumeSpecName "kube-api-access-sttkt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:41:05.925766 kubelet[2622]: I0710 23:41:05.925197 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:41:05.927748 kubelet[2622]: I0710 23:41:05.927684 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8704ec03-be53-43d4-92b6-cb92389446c1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 23:41:05.928331 kubelet[2622]: I0710 23:41:05.928268 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-kube-api-access-w86pc" (OuterVolumeSpecName: "kube-api-access-w86pc") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "kube-api-access-w86pc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 23:41:05.931250 kubelet[2622]: I0710 23:41:05.931195 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a379e5e-2e05-4fb7-81b5-154c7b297b39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0a379e5e-2e05-4fb7-81b5-154c7b297b39" (UID: "0a379e5e-2e05-4fb7-81b5-154c7b297b39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:41:05.932144 kubelet[2622]: I0710 23:41:05.932115 2622 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8704ec03-be53-43d4-92b6-cb92389446c1" (UID: "8704ec03-be53-43d4-92b6-cb92389446c1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 23:41:06.012010 kubelet[2622]: I0710 23:41:06.011959 2622 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012010 kubelet[2622]: I0710 23:41:06.011997 2622 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w86pc\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-kube-api-access-w86pc\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012010 kubelet[2622]: I0710 23:41:06.012007 2622 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012010 kubelet[2622]: I0710 23:41:06.012015 2622 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012010 kubelet[2622]: I0710 23:41:06.012027 2622 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8704ec03-be53-43d4-92b6-cb92389446c1-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012035 2622 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012047 2622 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0a379e5e-2e05-4fb7-81b5-154c7b297b39-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 10 
23:41:06.012234 kubelet[2622]: I0710 23:41:06.012055 2622 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012063 2622 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012071 2622 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012079 2622 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012088 2622 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8704ec03-be53-43d4-92b6-cb92389446c1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012234 kubelet[2622]: I0710 23:41:06.012096 2622 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012426 kubelet[2622]: I0710 23:41:06.012105 2622 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012426 kubelet[2622]: I0710 23:41:06.012112 2622 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-sttkt\" (UniqueName: \"kubernetes.io/projected/0a379e5e-2e05-4fb7-81b5-154c7b297b39-kube-api-access-sttkt\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.012426 kubelet[2622]: I0710 23:41:06.012119 2622 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8704ec03-be53-43d4-92b6-cb92389446c1-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 10 23:41:06.682101 kubelet[2622]: I0710 23:41:06.682066 2622 scope.go:117] "RemoveContainer" containerID="44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf" Jul 10 23:41:06.685131 systemd[1]: var-lib-kubelet-pods-0a379e5e\x2d2e05\x2d4fb7\x2d81b5\x2d154c7b297b39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsttkt.mount: Deactivated successfully. Jul 10 23:41:06.685534 systemd[1]: var-lib-kubelet-pods-8704ec03\x2dbe53\x2d43d4\x2d92b6\x2dcb92389446c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw86pc.mount: Deactivated successfully. Jul 10 23:41:06.685593 systemd[1]: var-lib-kubelet-pods-8704ec03\x2dbe53\x2d43d4\x2d92b6\x2dcb92389446c1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 23:41:06.685643 systemd[1]: var-lib-kubelet-pods-8704ec03\x2dbe53\x2d43d4\x2d92b6\x2dcb92389446c1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 23:41:06.687205 containerd[1500]: time="2025-07-10T23:41:06.687098483Z" level=info msg="RemoveContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\"" Jul 10 23:41:06.690676 systemd[1]: Removed slice kubepods-burstable-pod8704ec03_be53_43d4_92b6_cb92389446c1.slice - libcontainer container kubepods-burstable-pod8704ec03_be53_43d4_92b6_cb92389446c1.slice. Jul 10 23:41:06.690949 systemd[1]: kubepods-burstable-pod8704ec03_be53_43d4_92b6_cb92389446c1.slice: Consumed 7.615s CPU time, 128M memory peak, 676K read from disk, 12.9M written to disk. 
Jul 10 23:41:06.692792 systemd[1]: Removed slice kubepods-besteffort-pod0a379e5e_2e05_4fb7_81b5_154c7b297b39.slice - libcontainer container kubepods-besteffort-pod0a379e5e_2e05_4fb7_81b5_154c7b297b39.slice. Jul 10 23:41:06.706186 containerd[1500]: time="2025-07-10T23:41:06.705993358Z" level=info msg="RemoveContainer for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" returns successfully" Jul 10 23:41:06.714487 kubelet[2622]: I0710 23:41:06.714437 2622 scope.go:117] "RemoveContainer" containerID="5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0" Jul 10 23:41:06.718743 containerd[1500]: time="2025-07-10T23:41:06.718160352Z" level=info msg="RemoveContainer for \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\"" Jul 10 23:41:06.723728 containerd[1500]: time="2025-07-10T23:41:06.723680026Z" level=info msg="RemoveContainer for \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" returns successfully" Jul 10 23:41:06.723939 kubelet[2622]: I0710 23:41:06.723904 2622 scope.go:117] "RemoveContainer" containerID="deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0" Jul 10 23:41:06.725979 containerd[1500]: time="2025-07-10T23:41:06.725944439Z" level=info msg="RemoveContainer for \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\"" Jul 10 23:41:06.729453 containerd[1500]: time="2025-07-10T23:41:06.729418421Z" level=info msg="RemoveContainer for \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" returns successfully" Jul 10 23:41:06.729613 kubelet[2622]: I0710 23:41:06.729582 2622 scope.go:117] "RemoveContainer" containerID="40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77" Jul 10 23:41:06.731020 containerd[1500]: time="2025-07-10T23:41:06.730990430Z" level=info msg="RemoveContainer for \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\"" Jul 10 23:41:06.736783 containerd[1500]: time="2025-07-10T23:41:06.736745185Z" level=info 
msg="RemoveContainer for \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" returns successfully" Jul 10 23:41:06.737007 kubelet[2622]: I0710 23:41:06.736966 2622 scope.go:117] "RemoveContainer" containerID="af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325" Jul 10 23:41:06.738480 containerd[1500]: time="2025-07-10T23:41:06.738451316Z" level=info msg="RemoveContainer for \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\"" Jul 10 23:41:06.741087 containerd[1500]: time="2025-07-10T23:41:06.741055211Z" level=info msg="RemoveContainer for \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" returns successfully" Jul 10 23:41:06.741254 kubelet[2622]: I0710 23:41:06.741220 2622 scope.go:117] "RemoveContainer" containerID="44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf" Jul 10 23:41:06.741460 containerd[1500]: time="2025-07-10T23:41:06.741423534Z" level=error msg="ContainerStatus for \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\": not found" Jul 10 23:41:06.747153 kubelet[2622]: E0710 23:41:06.747100 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\": not found" containerID="44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf" Jul 10 23:41:06.747328 kubelet[2622]: I0710 23:41:06.747271 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf"} err="failed to get container status \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"44473c50481a03680adbc1e054d0eca9ae850e94fd80e5e842bd6d906c1773cf\": not found" Jul 10 23:41:06.747386 kubelet[2622]: I0710 23:41:06.747375 2622 scope.go:117] "RemoveContainer" containerID="5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0" Jul 10 23:41:06.747723 containerd[1500]: time="2025-07-10T23:41:06.747665732Z" level=error msg="ContainerStatus for \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\": not found" Jul 10 23:41:06.748029 kubelet[2622]: E0710 23:41:06.747910 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\": not found" containerID="5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0" Jul 10 23:41:06.748029 kubelet[2622]: I0710 23:41:06.747940 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0"} err="failed to get container status \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\": rpc error: code = NotFound desc = an error occurred when try to find container \"5191c5926559c24901adfa4c54f0ed4a70809e1d30a37be7bffd5a37adbb6de0\": not found" Jul 10 23:41:06.748029 kubelet[2622]: I0710 23:41:06.747954 2622 scope.go:117] "RemoveContainer" containerID="deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0" Jul 10 23:41:06.748193 containerd[1500]: time="2025-07-10T23:41:06.748141175Z" level=error msg="ContainerStatus for \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\": not found" Jul 10 23:41:06.748379 kubelet[2622]: E0710 23:41:06.748256 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\": not found" containerID="deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0" Jul 10 23:41:06.748379 kubelet[2622]: I0710 23:41:06.748283 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0"} err="failed to get container status \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"deb1a64872dfb3436ed6d548a7ce178080cf2aa75dc3bbfd3141c4a8535430d0\": not found" Jul 10 23:41:06.748379 kubelet[2622]: I0710 23:41:06.748309 2622 scope.go:117] "RemoveContainer" containerID="40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77" Jul 10 23:41:06.748525 containerd[1500]: time="2025-07-10T23:41:06.748466617Z" level=error msg="ContainerStatus for \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\": not found" Jul 10 23:41:06.748810 kubelet[2622]: E0710 23:41:06.748634 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\": not found" containerID="40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77" Jul 10 23:41:06.748810 kubelet[2622]: I0710 23:41:06.748666 2622 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77"} err="failed to get container status \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\": rpc error: code = NotFound desc = an error occurred when try to find container \"40e3d655875c87e951133215889c3290080cb6f1ff403d4a389d805db1db7e77\": not found" Jul 10 23:41:06.748810 kubelet[2622]: I0710 23:41:06.748689 2622 scope.go:117] "RemoveContainer" containerID="af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325" Jul 10 23:41:06.749359 containerd[1500]: time="2025-07-10T23:41:06.749087220Z" level=error msg="ContainerStatus for \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\": not found" Jul 10 23:41:06.749490 kubelet[2622]: E0710 23:41:06.749218 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\": not found" containerID="af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325" Jul 10 23:41:06.749490 kubelet[2622]: I0710 23:41:06.749245 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325"} err="failed to get container status \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\": rpc error: code = NotFound desc = an error occurred when try to find container \"af48d08accb0497535dda58de4d3a2d3a40c434705ecae350d2d2dfb26313325\": not found" Jul 10 23:41:06.749490 kubelet[2622]: I0710 23:41:06.749258 2622 scope.go:117] "RemoveContainer" containerID="8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749" Jul 10 23:41:06.750749 containerd[1500]: 
time="2025-07-10T23:41:06.750695430Z" level=info msg="RemoveContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\"" Jul 10 23:41:06.753486 containerd[1500]: time="2025-07-10T23:41:06.753459127Z" level=info msg="RemoveContainer for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" returns successfully" Jul 10 23:41:06.753662 kubelet[2622]: I0710 23:41:06.753637 2622 scope.go:117] "RemoveContainer" containerID="8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749" Jul 10 23:41:06.754016 containerd[1500]: time="2025-07-10T23:41:06.753945530Z" level=error msg="ContainerStatus for \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\": not found" Jul 10 23:41:06.754185 kubelet[2622]: E0710 23:41:06.754153 2622 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\": not found" containerID="8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749" Jul 10 23:41:06.754230 kubelet[2622]: I0710 23:41:06.754185 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749"} err="failed to get container status \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d48952159bfe19f97d7dcf007b0bc43b2317c82ed8aa98cf4f89601625df749\": not found" Jul 10 23:41:07.437000 kubelet[2622]: I0710 23:41:07.436955 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a379e5e-2e05-4fb7-81b5-154c7b297b39" path="/var/lib/kubelet/pods/0a379e5e-2e05-4fb7-81b5-154c7b297b39/volumes" 
Jul 10 23:41:07.437355 kubelet[2622]: I0710 23:41:07.437332 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8704ec03-be53-43d4-92b6-cb92389446c1" path="/var/lib/kubelet/pods/8704ec03-be53-43d4-92b6-cb92389446c1/volumes" Jul 10 23:41:07.591787 sshd[4227]: Connection closed by 10.0.0.1 port 32822 Jul 10 23:41:07.593343 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jul 10 23:41:07.610693 systemd[1]: sshd@21-10.0.0.34:22-10.0.0.1:32822.service: Deactivated successfully. Jul 10 23:41:07.614845 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 23:41:07.616945 systemd[1]: session-22.scope: Consumed 1.155s CPU time, 23.9M memory peak. Jul 10 23:41:07.618120 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Jul 10 23:41:07.623669 systemd[1]: Started sshd@22-10.0.0.34:22-10.0.0.1:32834.service - OpenSSH per-connection server daemon (10.0.0.1:32834). Jul 10 23:41:07.624524 systemd-logind[1478]: Removed session 22. Jul 10 23:41:07.683491 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 32834 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA Jul 10 23:41:07.685590 sshd-session[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 23:41:07.691619 systemd-logind[1478]: New session 23 of user core. Jul 10 23:41:07.704911 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 23:41:08.712191 sshd[4382]: Connection closed by 10.0.0.1 port 32834 Jul 10 23:41:08.712275 sshd-session[4380]: pam_unix(sshd:session): session closed for user core Jul 10 23:41:08.730412 systemd[1]: sshd@22-10.0.0.34:22-10.0.0.1:32834.service: Deactivated successfully. Jul 10 23:41:08.732390 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 23:41:08.736773 systemd-logind[1478]: Session 23 logged out. Waiting for processes to exit. 
Jul 10 23:41:08.743993 systemd[1]: Started sshd@23-10.0.0.34:22-10.0.0.1:32844.service - OpenSSH per-connection server daemon (10.0.0.1:32844).
Jul 10 23:41:08.747019 systemd-logind[1478]: Removed session 23.
Jul 10 23:41:08.756737 systemd[1]: Created slice kubepods-burstable-podadfe0b4d_7a74_449e_bef1_62efadbfb1a6.slice - libcontainer container kubepods-burstable-podadfe0b4d_7a74_449e_bef1_62efadbfb1a6.slice.
Jul 10 23:41:08.805718 sshd[4395]: Accepted publickey for core from 10.0.0.1 port 32844 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:41:08.808972 sshd-session[4395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:41:08.815919 systemd-logind[1478]: New session 24 of user core.
Jul 10 23:41:08.823885 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 10 23:41:08.830432 kubelet[2622]: I0710 23:41:08.830396 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-host-proc-sys-net\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830435 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-hostproc\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830458 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr89t\" (UniqueName: \"kubernetes.io/projected/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-kube-api-access-cr89t\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830474 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-bpf-maps\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830488 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-cilium-cgroup\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830502 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-etc-cni-netd\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830685 kubelet[2622]: I0710 23:41:08.830516 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-cilium-ipsec-secrets\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830532 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-cni-path\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830547 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-xtables-lock\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830562 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-cilium-config-path\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830576 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-lib-modules\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830590 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-clustermesh-secrets\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830853 kubelet[2622]: I0710 23:41:08.830604 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-cilium-run\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830967 kubelet[2622]: I0710 23:41:08.830620 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-host-proc-sys-kernel\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.830967 kubelet[2622]: I0710 23:41:08.830637 2622 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/adfe0b4d-7a74-449e-bef1-62efadbfb1a6-hubble-tls\") pod \"cilium-g6kjl\" (UID: \"adfe0b4d-7a74-449e-bef1-62efadbfb1a6\") " pod="kube-system/cilium-g6kjl"
Jul 10 23:41:08.873946 sshd[4397]: Connection closed by 10.0.0.1 port 32844
Jul 10 23:41:08.874359 sshd-session[4395]: pam_unix(sshd:session): session closed for user core
Jul 10 23:41:08.899749 systemd[1]: sshd@23-10.0.0.34:22-10.0.0.1:32844.service: Deactivated successfully.
Jul 10 23:41:08.902249 systemd[1]: session-24.scope: Deactivated successfully.
Jul 10 23:41:08.903219 systemd-logind[1478]: Session 24 logged out. Waiting for processes to exit.
Jul 10 23:41:08.906770 systemd-logind[1478]: Removed session 24.
Jul 10 23:41:08.910130 systemd[1]: Started sshd@24-10.0.0.34:22-10.0.0.1:32854.service - OpenSSH per-connection server daemon (10.0.0.1:32854).
Jul 10 23:41:08.971763 sshd[4404]: Accepted publickey for core from 10.0.0.1 port 32854 ssh2: RSA SHA256:WeUQKeUHIYQBEC6vd2p1LygcOYX3O2m1zuoI/cCo1DA
Jul 10 23:41:08.973199 sshd-session[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 10 23:41:08.977385 systemd-logind[1478]: New session 25 of user core.
Jul 10 23:41:08.988936 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 10 23:41:09.060795 kubelet[2622]: E0710 23:41:09.060735 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:09.061452 containerd[1500]: time="2025-07-10T23:41:09.061405822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6kjl,Uid:adfe0b4d-7a74-449e-bef1-62efadbfb1a6,Namespace:kube-system,Attempt:0,}"
Jul 10 23:41:09.078043 containerd[1500]: time="2025-07-10T23:41:09.077931595Z" level=info msg="connecting to shim bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" namespace=k8s.io protocol=ttrpc version=3
Jul 10 23:41:09.114932 systemd[1]: Started cri-containerd-bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11.scope - libcontainer container bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11.
Jul 10 23:41:09.138954 containerd[1500]: time="2025-07-10T23:41:09.138913337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g6kjl,Uid:adfe0b4d-7a74-449e-bef1-62efadbfb1a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\""
Jul 10 23:41:09.139617 kubelet[2622]: E0710 23:41:09.139594 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:09.144869 containerd[1500]: time="2025-07-10T23:41:09.144831130Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 10 23:41:09.151559 containerd[1500]: time="2025-07-10T23:41:09.151512368Z" level=info msg="Container 6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:41:09.156879 containerd[1500]: time="2025-07-10T23:41:09.156825957Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\""
Jul 10 23:41:09.157539 containerd[1500]: time="2025-07-10T23:41:09.157376761Z" level=info msg="StartContainer for \"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\""
Jul 10 23:41:09.158350 containerd[1500]: time="2025-07-10T23:41:09.158319086Z" level=info msg="connecting to shim 6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" protocol=ttrpc version=3
Jul 10 23:41:09.179950 systemd[1]: Started cri-containerd-6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467.scope - libcontainer container 6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467.
Jul 10 23:41:09.217960 containerd[1500]: time="2025-07-10T23:41:09.217908780Z" level=info msg="StartContainer for \"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\" returns successfully"
Jul 10 23:41:09.234359 systemd[1]: cri-containerd-6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467.scope: Deactivated successfully.
Jul 10 23:41:09.238178 containerd[1500]: time="2025-07-10T23:41:09.238131214Z" level=info msg="received exit event container_id:\"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\" id:\"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\" pid:4476 exited_at:{seconds:1752190869 nanos:237900092}"
Jul 10 23:41:09.238429 containerd[1500]: time="2025-07-10T23:41:09.238409895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\" id:\"6c85d779b9b83af257f0b0ac3b3c11e311e7e1378d59d1a7864923c733177467\" pid:4476 exited_at:{seconds:1752190869 nanos:237900092}"
Jul 10 23:41:09.693821 kubelet[2622]: E0710 23:41:09.693782 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:09.712683 containerd[1500]: time="2025-07-10T23:41:09.712630075Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 10 23:41:09.719826 containerd[1500]: time="2025-07-10T23:41:09.719772635Z" level=info msg="Container efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:41:09.774746 containerd[1500]: time="2025-07-10T23:41:09.774661623Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\""
Jul 10 23:41:09.775211 containerd[1500]: time="2025-07-10T23:41:09.775187866Z" level=info msg="StartContainer for \"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\""
Jul 10 23:41:09.776652 containerd[1500]: time="2025-07-10T23:41:09.776608714Z" level=info msg="connecting to shim efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" protocol=ttrpc version=3
Jul 10 23:41:09.796933 systemd[1]: Started cri-containerd-efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb.scope - libcontainer container efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb.
Jul 10 23:41:09.830605 containerd[1500]: time="2025-07-10T23:41:09.830568977Z" level=info msg="StartContainer for \"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\" returns successfully"
Jul 10 23:41:09.837048 systemd[1]: cri-containerd-efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb.scope: Deactivated successfully.
Jul 10 23:41:09.838310 containerd[1500]: time="2025-07-10T23:41:09.838072459Z" level=info msg="TaskExit event in podsandbox handler container_id:\"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\" id:\"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\" pid:4520 exited_at:{seconds:1752190869 nanos:837688737}"
Jul 10 23:41:09.843175 containerd[1500]: time="2025-07-10T23:41:09.843124127Z" level=info msg="received exit event container_id:\"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\" id:\"efa8abf566616a3dae399b4e0e4f3a7f54c13ab3b707ae6b90d57335ba1260bb\" pid:4520 exited_at:{seconds:1752190869 nanos:837688737}"
Jul 10 23:41:10.504797 kubelet[2622]: E0710 23:41:10.504740 2622 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 10 23:41:10.698249 kubelet[2622]: E0710 23:41:10.698199 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:10.705664 containerd[1500]: time="2025-07-10T23:41:10.705526259Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 10 23:41:10.732685 containerd[1500]: time="2025-07-10T23:41:10.732460446Z" level=info msg="Container dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:41:10.742226 containerd[1500]: time="2025-07-10T23:41:10.742161859Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\""
Jul 10 23:41:10.744733 containerd[1500]: time="2025-07-10T23:41:10.742895543Z" level=info msg="StartContainer for \"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\""
Jul 10 23:41:10.744733 containerd[1500]: time="2025-07-10T23:41:10.744398871Z" level=info msg="connecting to shim dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" protocol=ttrpc version=3
Jul 10 23:41:10.774972 systemd[1]: Started cri-containerd-dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4.scope - libcontainer container dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4.
Jul 10 23:41:10.810654 systemd[1]: cri-containerd-dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4.scope: Deactivated successfully.
Jul 10 23:41:10.811675 containerd[1500]: time="2025-07-10T23:41:10.811594358Z" level=info msg="received exit event container_id:\"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\" id:\"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\" pid:4565 exited_at:{seconds:1752190870 nanos:811358716}"
Jul 10 23:41:10.811897 containerd[1500]: time="2025-07-10T23:41:10.811866719Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\" id:\"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\" pid:4565 exited_at:{seconds:1752190870 nanos:811358716}"
Jul 10 23:41:10.812153 containerd[1500]: time="2025-07-10T23:41:10.812128401Z" level=info msg="StartContainer for \"dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4\" returns successfully"
Jul 10 23:41:10.832655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff059b6b9b8bcb15e3bdf2e5bb4aeada279e0805f4cdba358b9dc5bff035dc4-rootfs.mount: Deactivated successfully.
Jul 10 23:41:11.704961 kubelet[2622]: E0710 23:41:11.704633 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:11.725337 containerd[1500]: time="2025-07-10T23:41:11.725280520Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 10 23:41:11.738943 containerd[1500]: time="2025-07-10T23:41:11.738901032Z" level=info msg="Container 2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:41:11.744881 containerd[1500]: time="2025-07-10T23:41:11.744838664Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\""
Jul 10 23:41:11.746567 containerd[1500]: time="2025-07-10T23:41:11.745416147Z" level=info msg="StartContainer for \"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\""
Jul 10 23:41:11.746567 containerd[1500]: time="2025-07-10T23:41:11.746267232Z" level=info msg="connecting to shim 2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" protocol=ttrpc version=3
Jul 10 23:41:11.776038 systemd[1]: Started cri-containerd-2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7.scope - libcontainer container 2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7.
Jul 10 23:41:11.809735 systemd[1]: cri-containerd-2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7.scope: Deactivated successfully.
Jul 10 23:41:11.812550 containerd[1500]: time="2025-07-10T23:41:11.812498823Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\" id:\"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\" pid:4604 exited_at:{seconds:1752190871 nanos:812232582}"
Jul 10 23:41:11.816103 containerd[1500]: time="2025-07-10T23:41:11.816049202Z" level=info msg="received exit event container_id:\"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\" id:\"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\" pid:4604 exited_at:{seconds:1752190871 nanos:812232582}"
Jul 10 23:41:11.824026 containerd[1500]: time="2025-07-10T23:41:11.823921644Z" level=info msg="StartContainer for \"2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7\" returns successfully"
Jul 10 23:41:11.840898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2cca63c7e4da004233e13f078ec25e35e59980ec55ef47959e12934c14a100a7-rootfs.mount: Deactivated successfully.
Jul 10 23:41:12.724527 kubelet[2622]: E0710 23:41:12.724435 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:12.734940 containerd[1500]: time="2025-07-10T23:41:12.734883941Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 10 23:41:12.763557 containerd[1500]: time="2025-07-10T23:41:12.761690640Z" level=info msg="Container 1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264: CDI devices from CRI Config.CDIDevices: []"
Jul 10 23:41:12.773791 containerd[1500]: time="2025-07-10T23:41:12.773751742Z" level=info msg="CreateContainer within sandbox \"bd5e513ccf30767c8519115b883173206615a9a344b14f6eec2688f43e4d4e11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\""
Jul 10 23:41:12.774514 containerd[1500]: time="2025-07-10T23:41:12.774452666Z" level=info msg="StartContainer for \"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\""
Jul 10 23:41:12.781967 containerd[1500]: time="2025-07-10T23:41:12.781695303Z" level=info msg="connecting to shim 1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264" address="unix:///run/containerd/s/8c4ae45757503cac996d3cb003c76842e95b0415ef8b4cd9a8015ab7efc18e15" protocol=ttrpc version=3
Jul 10 23:41:12.802051 systemd[1]: Started cri-containerd-1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264.scope - libcontainer container 1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264.
Jul 10 23:41:12.843994 containerd[1500]: time="2025-07-10T23:41:12.843955745Z" level=info msg="StartContainer for \"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" returns successfully"
Jul 10 23:41:12.909285 containerd[1500]: time="2025-07-10T23:41:12.909244363Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" id:\"e3bd8b802a7619e0535a02355c851a36ca89842915099ac525bdef50bd6769f3\" pid:4676 exited_at:{seconds:1752190872 nanos:908935321}"
Jul 10 23:41:13.181763 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 10 23:41:13.731647 kubelet[2622]: E0710 23:41:13.731617 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:15.063048 kubelet[2622]: E0710 23:41:15.062645 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:15.446764 containerd[1500]: time="2025-07-10T23:41:15.446272983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" id:\"77af51a186f39d0c0b0db36d8fe581e4d72ccf3107c6f48b7fb95613dbe0a5d9\" pid:4954 exit_status:1 exited_at:{seconds:1752190875 nanos:445245458}"
Jul 10 23:41:16.216072 systemd-networkd[1429]: lxc_health: Link UP
Jul 10 23:41:16.228925 systemd-networkd[1429]: lxc_health: Gained carrier
Jul 10 23:41:17.066009 kubelet[2622]: E0710 23:41:17.065961 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:17.102449 kubelet[2622]: I0710 23:41:17.102367 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-g6kjl" podStartSLOduration=9.102348982 podStartE2EDuration="9.102348982s" podCreationTimestamp="2025-07-10 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 23:41:13.748085519 +0000 UTC m=+78.435877706" watchObservedRunningTime="2025-07-10 23:41:17.102348982 +0000 UTC m=+81.790141249"
Jul 10 23:41:17.386868 systemd-networkd[1429]: lxc_health: Gained IPv6LL
Jul 10 23:41:17.606736 containerd[1500]: time="2025-07-10T23:41:17.606676066Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" id:\"8882af7b390cefe91d0b706bd8f9de6b7c0c72a883d4fb085ca0a006f772c95d\" pid:5215 exited_at:{seconds:1752190877 nanos:606033544}"
Jul 10 23:41:17.740019 kubelet[2622]: E0710 23:41:17.739859 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 23:41:19.734376 containerd[1500]: time="2025-07-10T23:41:19.734331981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" id:\"c4aecc7ce1bb30e270965980219ba84bd7dc7e5105ae78a3bc522fdac7268352\" pid:5242 exited_at:{seconds:1752190879 nanos:733870619}"
Jul 10 23:41:19.736823 kubelet[2622]: E0710 23:41:19.736787 2622 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41186->127.0.0.1:33459: write tcp 127.0.0.1:41186->127.0.0.1:33459: write: broken pipe
Jul 10 23:41:21.876495 containerd[1500]: time="2025-07-10T23:41:21.876387656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d48b92d35fcc1e52164bc49c86e4286ab15840916c0732521b6cc5f5b056264\" id:\"3c075968e3a69d0543d8605dcaec622540dc3fbac1deaa369cf217a12cd96d0a\" pid:5274 exited_at:{seconds:1752190881 nanos:875922494}"
Jul 10 23:41:21.882240 sshd[4410]: Connection closed by 10.0.0.1 port 32854
Jul 10 23:41:21.882815 sshd-session[4404]: pam_unix(sshd:session): session closed for user core
Jul 10 23:41:21.886631 systemd[1]: sshd@24-10.0.0.34:22-10.0.0.1:32854.service: Deactivated successfully.
Jul 10 23:41:21.888556 systemd[1]: session-25.scope: Deactivated successfully.
Jul 10 23:41:21.889378 systemd-logind[1478]: Session 25 logged out. Waiting for processes to exit.
Jul 10 23:41:21.890439 systemd-logind[1478]: Removed session 25.
Jul 10 23:41:23.439217 kubelet[2622]: E0710 23:41:23.439173 2622 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"