May 13 12:43:03.825304 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 12:43:03.825325 kernel: Linux version 6.12.28-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 13 11:28:23 -00 2025
May 13 12:43:03.825334 kernel: KASLR enabled
May 13 12:43:03.825339 kernel: efi: EFI v2.7 by EDK II
May 13 12:43:03.825345 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 13 12:43:03.825350 kernel: random: crng init done
May 13 12:43:03.825357 kernel: secureboot: Secure boot disabled
May 13 12:43:03.825363 kernel: ACPI: Early table checksum verification disabled
May 13 12:43:03.825368 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 13 12:43:03.825375 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 12:43:03.825381 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825386 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825392 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825398 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825405 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825412 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825418 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825424 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825430 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 12:43:03.825436 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 12:43:03.825442 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 13 12:43:03.825448 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:43:03.825454 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 13 12:43:03.825460 kernel: Zone ranges:
May 13 12:43:03.825466 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:43:03.825474 kernel: DMA32 empty
May 13 12:43:03.825479 kernel: Normal empty
May 13 12:43:03.825485 kernel: Device empty
May 13 12:43:03.825491 kernel: Movable zone start for each node
May 13 12:43:03.825497 kernel: Early memory node ranges
May 13 12:43:03.825503 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 13 12:43:03.825509 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 13 12:43:03.825515 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 13 12:43:03.825521 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 13 12:43:03.825527 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 13 12:43:03.825532 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 13 12:43:03.825538 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 13 12:43:03.825545 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 13 12:43:03.825551 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 13 12:43:03.825557 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 12:43:03.825566 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 12:43:03.825572 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 12:43:03.825579 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 12:43:03.825596 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 12:43:03.825603 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 12:43:03.825609 kernel: psci: probing for conduit method from ACPI.
May 13 12:43:03.825615 kernel: psci: PSCIv1.1 detected in firmware.
May 13 12:43:03.825622 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 12:43:03.825628 kernel: psci: Trusted OS migration not required
May 13 12:43:03.825634 kernel: psci: SMC Calling Convention v1.1
May 13 12:43:03.825640 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 12:43:03.825647 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 13 12:43:03.825653 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 13 12:43:03.825662 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 12:43:03.825668 kernel: Detected PIPT I-cache on CPU0
May 13 12:43:03.825674 kernel: CPU features: detected: GIC system register CPU interface
May 13 12:43:03.825681 kernel: CPU features: detected: Spectre-v4
May 13 12:43:03.825687 kernel: CPU features: detected: Spectre-BHB
May 13 12:43:03.825693 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 12:43:03.825699 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 12:43:03.825706 kernel: CPU features: detected: ARM erratum 1418040
May 13 12:43:03.825712 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 12:43:03.825718 kernel: alternatives: applying boot alternatives
May 13 12:43:03.825725 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:43:03.825733 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 12:43:03.825740 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 12:43:03.825746 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 12:43:03.825753 kernel: Fallback order for Node 0: 0
May 13 12:43:03.825759 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 13 12:43:03.825765 kernel: Policy zone: DMA
May 13 12:43:03.825771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 12:43:03.825778 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 13 12:43:03.825784 kernel: software IO TLB: area num 4.
May 13 12:43:03.825790 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 13 12:43:03.825796 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 13 12:43:03.825803 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 12:43:03.825811 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 12:43:03.825817 kernel: rcu: RCU event tracing is enabled.
May 13 12:43:03.825824 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 12:43:03.825830 kernel: Trampoline variant of Tasks RCU enabled.
May 13 12:43:03.825837 kernel: Tracing variant of Tasks RCU enabled.
May 13 12:43:03.825843 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 12:43:03.825849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 12:43:03.825856 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:43:03.825862 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 12:43:03.825869 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 12:43:03.825875 kernel: GICv3: 256 SPIs implemented
May 13 12:43:03.825883 kernel: GICv3: 0 Extended SPIs implemented
May 13 12:43:03.825889 kernel: Root IRQ handler: gic_handle_irq
May 13 12:43:03.825895 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 12:43:03.825902 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 13 12:43:03.825908 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 12:43:03.825914 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 12:43:03.825921 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 13 12:43:03.825927 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 13 12:43:03.825934 kernel: GICv3: using LPI property table @0x0000000040100000
May 13 12:43:03.825940 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 13 12:43:03.825947 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 12:43:03.825953 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:43:03.825961 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 12:43:03.825967 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 12:43:03.825974 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 12:43:03.825980 kernel: arm-pv: using stolen time PV
May 13 12:43:03.825987 kernel: Console: colour dummy device 80x25
May 13 12:43:03.825993 kernel: ACPI: Core revision 20240827
May 13 12:43:03.826000 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 12:43:03.826007 kernel: pid_max: default: 32768 minimum: 301
May 13 12:43:03.826013 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 13 12:43:03.826021 kernel: landlock: Up and running.
May 13 12:43:03.826027 kernel: SELinux: Initializing.
May 13 12:43:03.826034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:43:03.826040 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 12:43:03.826047 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 12:43:03.826054 kernel: rcu: Hierarchical SRCU implementation.
May 13 12:43:03.826061 kernel: rcu: Max phase no-delay instances is 400.
May 13 12:43:03.826067 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 13 12:43:03.826074 kernel: Remapping and enabling EFI services.
May 13 12:43:03.826081 kernel: smp: Bringing up secondary CPUs ...
May 13 12:43:03.826092 kernel: Detected PIPT I-cache on CPU1
May 13 12:43:03.826099 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 12:43:03.826107 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 13 12:43:03.826114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:43:03.826121 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 12:43:03.826128 kernel: Detected PIPT I-cache on CPU2
May 13 12:43:03.826135 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 12:43:03.826142 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 13 12:43:03.826150 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:43:03.826156 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 12:43:03.826163 kernel: Detected PIPT I-cache on CPU3
May 13 12:43:03.826170 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 12:43:03.826177 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 13 12:43:03.826184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 12:43:03.826200 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 12:43:03.826208 kernel: smp: Brought up 1 node, 4 CPUs
May 13 12:43:03.826215 kernel: SMP: Total of 4 processors activated.
May 13 12:43:03.826224 kernel: CPU: All CPU(s) started at EL1
May 13 12:43:03.826231 kernel: CPU features: detected: 32-bit EL0 Support
May 13 12:43:03.826238 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 12:43:03.826245 kernel: CPU features: detected: Common not Private translations
May 13 12:43:03.826252 kernel: CPU features: detected: CRC32 instructions
May 13 12:43:03.826258 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 12:43:03.826265 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 12:43:03.826272 kernel: CPU features: detected: LSE atomic instructions
May 13 12:43:03.826279 kernel: CPU features: detected: Privileged Access Never
May 13 12:43:03.826288 kernel: CPU features: detected: RAS Extension Support
May 13 12:43:03.826295 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 12:43:03.826302 kernel: alternatives: applying system-wide alternatives
May 13 12:43:03.826309 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 13 12:43:03.826316 kernel: Memory: 2440920K/2572288K available (11072K kernel code, 2276K rwdata, 8932K rodata, 39488K init, 1034K bss, 125600K reserved, 0K cma-reserved)
May 13 12:43:03.826323 kernel: devtmpfs: initialized
May 13 12:43:03.826330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 12:43:03.826336 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 12:43:03.826343 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 12:43:03.826351 kernel: 0 pages in range for non-PLT usage
May 13 12:43:03.826358 kernel: 508528 pages in range for PLT usage
May 13 12:43:03.826365 kernel: pinctrl core: initialized pinctrl subsystem
May 13 12:43:03.826372 kernel: SMBIOS 3.0.0 present.
May 13 12:43:03.826379 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 13 12:43:03.826385 kernel: DMI: Memory slots populated: 1/1
May 13 12:43:03.826392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 12:43:03.826399 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 12:43:03.826406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 12:43:03.826414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 12:43:03.826421 kernel: audit: initializing netlink subsys (disabled)
May 13 12:43:03.826428 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 13 12:43:03.826435 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 12:43:03.826442 kernel: cpuidle: using governor menu
May 13 12:43:03.826449 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 12:43:03.826456 kernel: ASID allocator initialised with 32768 entries
May 13 12:43:03.826463 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 12:43:03.826469 kernel: Serial: AMBA PL011 UART driver
May 13 12:43:03.826478 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 12:43:03.826485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 12:43:03.826491 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 12:43:03.826498 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 12:43:03.826505 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 12:43:03.826512 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 12:43:03.826519 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 12:43:03.826526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 12:43:03.826532 kernel: ACPI: Added _OSI(Module Device)
May 13 12:43:03.826540 kernel: ACPI: Added _OSI(Processor Device)
May 13 12:43:03.826547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 12:43:03.826554 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 12:43:03.826561 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 12:43:03.826567 kernel: ACPI: Interpreter enabled
May 13 12:43:03.826574 kernel: ACPI: Using GIC for interrupt routing
May 13 12:43:03.826581 kernel: ACPI: MCFG table detected, 1 entries
May 13 12:43:03.826592 kernel: ACPI: CPU0 has been hot-added
May 13 12:43:03.826599 kernel: ACPI: CPU1 has been hot-added
May 13 12:43:03.826606 kernel: ACPI: CPU2 has been hot-added
May 13 12:43:03.826615 kernel: ACPI: CPU3 has been hot-added
May 13 12:43:03.826622 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 12:43:03.826629 kernel: printk: legacy console [ttyAMA0] enabled
May 13 12:43:03.826636 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 12:43:03.826758 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 12:43:03.826823 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 12:43:03.826880 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 12:43:03.826940 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 12:43:03.826996 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 12:43:03.827004 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 12:43:03.827012 kernel: PCI host bridge to bus 0000:00
May 13 12:43:03.827074 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 12:43:03.827129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 12:43:03.827181 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 12:43:03.827257 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 12:43:03.827333 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 13 12:43:03.827406 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 13 12:43:03.827466 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 13 12:43:03.827524 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 13 12:43:03.827582 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 12:43:03.827652 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 13 12:43:03.827717 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 13 12:43:03.827776 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 13 12:43:03.827828 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 12:43:03.827879 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 12:43:03.827931 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 12:43:03.827940 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 12:43:03.827947 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 12:43:03.827956 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 12:43:03.827963 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 12:43:03.827970 kernel: iommu: Default domain type: Translated
May 13 12:43:03.827976 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 12:43:03.827983 kernel: efivars: Registered efivars operations
May 13 12:43:03.827990 kernel: vgaarb: loaded
May 13 12:43:03.827997 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 12:43:03.828004 kernel: VFS: Disk quotas dquot_6.6.0
May 13 12:43:03.828011 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 12:43:03.828019 kernel: pnp: PnP ACPI init
May 13 12:43:03.828083 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 12:43:03.828092 kernel: pnp: PnP ACPI: found 1 devices
May 13 12:43:03.828099 kernel: NET: Registered PF_INET protocol family
May 13 12:43:03.828106 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 12:43:03.828113 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 12:43:03.828120 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 12:43:03.828127 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 12:43:03.828136 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 12:43:03.828143 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 12:43:03.828149 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:43:03.828156 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 12:43:03.828163 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 12:43:03.828170 kernel: PCI: CLS 0 bytes, default 64
May 13 12:43:03.828177 kernel: kvm [1]: HYP mode not available
May 13 12:43:03.828183 kernel: Initialise system trusted keyrings
May 13 12:43:03.828223 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 12:43:03.828234 kernel: Key type asymmetric registered
May 13 12:43:03.828241 kernel: Asymmetric key parser 'x509' registered
May 13 12:43:03.828248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 13 12:43:03.828255 kernel: io scheduler mq-deadline registered
May 13 12:43:03.828262 kernel: io scheduler kyber registered
May 13 12:43:03.828269 kernel: io scheduler bfq registered
May 13 12:43:03.828275 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 12:43:03.828282 kernel: ACPI: button: Power Button [PWRB]
May 13 12:43:03.828290 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 12:43:03.828363 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 12:43:03.828373 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 12:43:03.828380 kernel: thunder_xcv, ver 1.0
May 13 12:43:03.828387 kernel: thunder_bgx, ver 1.0
May 13 12:43:03.828394 kernel: nicpf, ver 1.0
May 13 12:43:03.828400 kernel: nicvf, ver 1.0
May 13 12:43:03.828468 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 12:43:03.828523 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T12:43:03 UTC (1747140183)
May 13 12:43:03.828534 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 12:43:03.828541 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 13 12:43:03.828548 kernel: watchdog: NMI not fully supported
May 13 12:43:03.828555 kernel: watchdog: Hard watchdog permanently disabled
May 13 12:43:03.828562 kernel: NET: Registered PF_INET6 protocol family
May 13 12:43:03.828568 kernel: Segment Routing with IPv6
May 13 12:43:03.828575 kernel: In-situ OAM (IOAM) with IPv6
May 13 12:43:03.828582 kernel: NET: Registered PF_PACKET protocol family
May 13 12:43:03.828597 kernel: Key type dns_resolver registered
May 13 12:43:03.828604 kernel: registered taskstats version 1
May 13 12:43:03.828613 kernel: Loading compiled-in X.509 certificates
May 13 12:43:03.828620 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.28-flatcar: f8df872077a0531ef71a44c67653908e8a70c520'
May 13 12:43:03.828627 kernel: Demotion targets for Node 0: null
May 13 12:43:03.828634 kernel: Key type .fscrypt registered
May 13 12:43:03.828641 kernel: Key type fscrypt-provisioning registered
May 13 12:43:03.828647 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 12:43:03.828654 kernel: ima: Allocated hash algorithm: sha1
May 13 12:43:03.828661 kernel: ima: No architecture policies found
May 13 12:43:03.828669 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 12:43:03.828676 kernel: clk: Disabling unused clocks
May 13 12:43:03.828683 kernel: PM: genpd: Disabling unused power domains
May 13 12:43:03.828689 kernel: Warning: unable to open an initial console.
May 13 12:43:03.828696 kernel: Freeing unused kernel memory: 39488K
May 13 12:43:03.828703 kernel: Run /init as init process
May 13 12:43:03.828710 kernel: with arguments:
May 13 12:43:03.828716 kernel: /init
May 13 12:43:03.828723 kernel: with environment:
May 13 12:43:03.828731 kernel: HOME=/
May 13 12:43:03.828738 kernel: TERM=linux
May 13 12:43:03.828744 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 12:43:03.828752 systemd[1]: Successfully made /usr/ read-only.
May 13 12:43:03.828761 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:43:03.828769 systemd[1]: Detected virtualization kvm.
May 13 12:43:03.828776 systemd[1]: Detected architecture arm64.
May 13 12:43:03.828783 systemd[1]: Running in initrd.
May 13 12:43:03.828792 systemd[1]: No hostname configured, using default hostname.
May 13 12:43:03.828799 systemd[1]: Hostname set to .
May 13 12:43:03.828806 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:43:03.828814 systemd[1]: Queued start job for default target initrd.target.
May 13 12:43:03.828821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:43:03.828828 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:43:03.828836 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 12:43:03.828844 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:43:03.828853 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 12:43:03.828861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 12:43:03.828869 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 12:43:03.828877 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 12:43:03.828884 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:43:03.828891 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:43:03.828898 systemd[1]: Reached target paths.target - Path Units.
May 13 12:43:03.828907 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:43:03.828914 systemd[1]: Reached target swap.target - Swaps.
May 13 12:43:03.828921 systemd[1]: Reached target timers.target - Timer Units.
May 13 12:43:03.828929 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:43:03.828936 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:43:03.828944 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 12:43:03.828951 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 13 12:43:03.828958 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:43:03.828967 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:43:03.828974 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:43:03.828982 systemd[1]: Reached target sockets.target - Socket Units.
May 13 12:43:03.828989 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 12:43:03.828996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:43:03.829004 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 12:43:03.829012 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 13 12:43:03.829019 systemd[1]: Starting systemd-fsck-usr.service...
May 13 12:43:03.829026 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:43:03.829035 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:43:03.829042 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:43:03.829050 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 12:43:03.829058 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:43:03.829066 systemd[1]: Finished systemd-fsck-usr.service.
May 13 12:43:03.829074 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 12:43:03.829098 systemd-journald[242]: Collecting audit messages is disabled.
May 13 12:43:03.829117 systemd-journald[242]: Journal started
May 13 12:43:03.829136 systemd-journald[242]: Runtime Journal (/run/log/journal/60e33e8e62f248f2af226d0dd55e6a76) is 6M, max 48.5M, 42.4M free.
May 13 12:43:03.820519 systemd-modules-load[244]: Inserted module 'overlay'
May 13 12:43:03.835293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:43:03.838211 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 12:43:03.838234 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:43:03.840605 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 13 12:43:03.841486 kernel: Bridge firewalling registered
May 13 12:43:03.841907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 12:43:03.843546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:43:03.845370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:43:03.858345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 12:43:03.862305 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:43:03.863720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:43:03.863952 systemd-tmpfiles[261]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 13 12:43:03.867025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:43:03.876281 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:43:03.877533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:43:03.879536 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:43:03.882285 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 12:43:03.884395 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:43:03.904081 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b20e935bbd8772a1b0c6883755acb6e2a52b7a903a0b8e12c8ff59ca86b84928
May 13 12:43:03.919237 systemd-resolved[290]: Positive Trust Anchors:
May 13 12:43:03.919254 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:43:03.919286 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:43:03.923967 systemd-resolved[290]: Defaulting to hostname 'linux'.
May 13 12:43:03.924876 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:43:03.928897 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:43:03.979224 kernel: SCSI subsystem initialized
May 13 12:43:03.984213 kernel: Loading iSCSI transport class v2.0-870.
May 13 12:43:03.991214 kernel: iscsi: registered transport (tcp)
May 13 12:43:04.004225 kernel: iscsi: registered transport (qla4xxx)
May 13 12:43:04.004261 kernel: QLogic iSCSI HBA Driver
May 13 12:43:04.019685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:43:04.034115 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:43:04.035900 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:43:04.078849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 12:43:04.080998 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 12:43:04.138227 kernel: raid6: neonx8 gen() 15656 MB/s
May 13 12:43:04.156205 kernel: raid6: neonx4 gen() 15430 MB/s
May 13 12:43:04.173217 kernel: raid6: neonx2 gen() 11661 MB/s
May 13 12:43:04.190218 kernel: raid6: neonx1 gen() 8188 MB/s
May 13 12:43:04.207215 kernel: raid6: int64x8 gen() 6871 MB/s
May 13 12:43:04.224214 kernel: raid6: int64x4 gen() 7328 MB/s
May 13 12:43:04.241215 kernel: raid6: int64x2 gen() 6096 MB/s
May 13 12:43:04.258370 kernel: raid6: int64x1 gen() 5041 MB/s
May 13 12:43:04.258392 kernel: raid6: using algorithm neonx8 gen() 15656 MB/s
May 13 12:43:04.276349 kernel: raid6: .... xor() 12030 MB/s, rmw enabled
May 13 12:43:04.276370 kernel: raid6: using neon recovery algorithm
May 13 12:43:04.283215 kernel: xor: measuring software checksum speed
May 13 12:43:04.283250 kernel: 8regs : 19303 MB/sec
May 13 12:43:04.284391 kernel: 32regs : 21681 MB/sec
May 13 12:43:04.285656 kernel: arm64_neon : 27908 MB/sec
May 13 12:43:04.285667 kernel: xor: using function: arm64_neon (27908 MB/sec)
May 13 12:43:04.340223 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 12:43:04.346758 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:43:04.349328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:43:04.377148 systemd-udevd[501]: Using default interface naming scheme 'v255'.
May 13 12:43:04.381439 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:43:04.383810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 12:43:04.406277 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
May 13 12:43:04.428005 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:43:04.431364 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:43:04.480891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:43:04.483559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 12:43:04.524942 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 12:43:04.527069 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 12:43:04.532165 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:43:04.532296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:43:04.535377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:43:04.537240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 12:43:04.541560 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 12:43:04.541597 kernel: GPT:9289727 != 19775487
May 13 12:43:04.542801 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 12:43:04.542823 kernel: GPT:9289727 != 19775487
May 13 12:43:04.543803 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 12:43:04.543827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:43:04.566349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:43:04.574114 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 12:43:04.576263 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 12:43:04.588780 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 12:43:04.594924 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 12:43:04.596023 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 12:43:04.604449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:43:04.605528 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:43:04.607395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:43:04.609294 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:43:04.611848 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 12:43:04.613527 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 12:43:04.628597 disk-uuid[593]: Primary Header is updated.
May 13 12:43:04.628597 disk-uuid[593]: Secondary Entries is updated.
May 13 12:43:04.628597 disk-uuid[593]: Secondary Header is updated.
May 13 12:43:04.632954 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:43:04.635660 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:43:05.644218 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 12:43:05.644520 disk-uuid[598]: The operation has completed successfully.
May 13 12:43:05.669329 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 12:43:05.669420 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 12:43:05.693719 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 12:43:05.719949 sh[612]: Success
May 13 12:43:05.735012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 12:43:05.736718 kernel: device-mapper: uevent: version 1.0.3
May 13 12:43:05.736739 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 13 12:43:05.749233 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 13 12:43:05.772252 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 12:43:05.774873 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 12:43:05.789932 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 12:43:05.797345 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 13 12:43:05.797383 kernel: BTRFS: device fsid 5ded7f9d-c045-4eec-a161-ff9af5b01d28 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (624)
May 13 12:43:05.798752 kernel: BTRFS info (device dm-0): first mount of filesystem 5ded7f9d-c045-4eec-a161-ff9af5b01d28
May 13 12:43:05.798777 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 12:43:05.800306 kernel: BTRFS info (device dm-0): using free-space-tree
May 13 12:43:05.803709 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 12:43:05.804912 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:43:05.806306 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 12:43:05.806976 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 12:43:05.808544 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 12:43:05.824229 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (655)
May 13 12:43:05.826491 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:43:05.826522 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:43:05.826532 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:43:05.834231 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:43:05.834475 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 12:43:05.836689 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 12:43:05.893703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:43:05.899319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:43:05.940178 systemd-networkd[798]: lo: Link UP
May 13 12:43:05.940203 systemd-networkd[798]: lo: Gained carrier
May 13 12:43:05.940904 systemd-networkd[798]: Enumeration completed
May 13 12:43:05.941008 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:43:05.941307 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:43:05.941310 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:43:05.942150 systemd-networkd[798]: eth0: Link UP
May 13 12:43:05.942153 systemd-networkd[798]: eth0: Gained carrier
May 13 12:43:05.942162 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:43:05.942589 systemd[1]: Reached target network.target - Network.
May 13 12:43:05.966181 ignition[706]: Ignition 2.21.0
May 13 12:43:05.966206 ignition[706]: Stage: fetch-offline
May 13 12:43:05.966238 ignition[706]: no configs at "/usr/lib/ignition/base.d"
May 13 12:43:05.966246 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:05.966419 ignition[706]: parsed url from cmdline: ""
May 13 12:43:05.969231 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 12:43:05.966422 ignition[706]: no config URL provided
May 13 12:43:05.966426 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
May 13 12:43:05.966432 ignition[706]: no config at "/usr/lib/ignition/user.ign"
May 13 12:43:05.966449 ignition[706]: op(1): [started] loading QEMU firmware config module
May 13 12:43:05.966453 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 12:43:05.974867 ignition[706]: op(1): [finished] loading QEMU firmware config module
May 13 12:43:06.011833 ignition[706]: parsing config with SHA512: 90eb1cc9ff03316eb2e114aefe7f4bd15469835c874fbf3acf8691b491306f0a2453e05c65da69006cf633865983eeb89187a7c0a0a45275473d84b131200c27
May 13 12:43:06.017227 unknown[706]: fetched base config from "system"
May 13 12:43:06.017239 unknown[706]: fetched user config from "qemu"
May 13 12:43:06.017693 ignition[706]: fetch-offline: fetch-offline passed
May 13 12:43:06.019935 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:43:06.017748 ignition[706]: Ignition finished successfully
May 13 12:43:06.021179 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 12:43:06.021909 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 12:43:06.052495 ignition[812]: Ignition 2.21.0
May 13 12:43:06.052504 ignition[812]: Stage: kargs
May 13 12:43:06.052652 ignition[812]: no configs at "/usr/lib/ignition/base.d"
May 13 12:43:06.052660 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:06.055288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 12:43:06.053440 ignition[812]: kargs: kargs passed
May 13 12:43:06.057589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 12:43:06.053481 ignition[812]: Ignition finished successfully
May 13 12:43:06.076879 ignition[820]: Ignition 2.21.0
May 13 12:43:06.076895 ignition[820]: Stage: disks
May 13 12:43:06.077023 ignition[820]: no configs at "/usr/lib/ignition/base.d"
May 13 12:43:06.077032 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:06.078482 ignition[820]: disks: disks passed
May 13 12:43:06.080109 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 12:43:06.078544 ignition[820]: Ignition finished successfully
May 13 12:43:06.081522 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 12:43:06.082867 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 12:43:06.084752 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:43:06.086280 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:43:06.088084 systemd[1]: Reached target basic.target - Basic System.
May 13 12:43:06.090686 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 12:43:06.111151 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 13 12:43:06.114725 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 12:43:06.117333 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 12:43:06.181219 kernel: EXT4-fs (vda9): mounted filesystem 02660b30-6941-48da-9f0e-501a024e2c48 r/w with ordered data mode. Quota mode: none.
May 13 12:43:06.181508 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 12:43:06.182678 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 12:43:06.185745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:43:06.188255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 12:43:06.190115 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 12:43:06.191861 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 12:43:06.191887 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:43:06.199555 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 12:43:06.201853 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 12:43:06.206905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (839)
May 13 12:43:06.206926 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:43:06.206935 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:43:06.206944 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:43:06.210359 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:43:06.249498 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory
May 13 12:43:06.253133 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory
May 13 12:43:06.257106 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory
May 13 12:43:06.260797 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 12:43:06.334417 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 12:43:06.336679 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 12:43:06.338257 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 12:43:06.356214 kernel: BTRFS info (device vda6): last unmount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:43:06.372299 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 12:43:06.384953 ignition[953]: INFO : Ignition 2.21.0
May 13 12:43:06.384953 ignition[953]: INFO : Stage: mount
May 13 12:43:06.386538 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:43:06.386538 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:06.389453 ignition[953]: INFO : mount: mount passed
May 13 12:43:06.389453 ignition[953]: INFO : Ignition finished successfully
May 13 12:43:06.390249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 12:43:06.392930 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 12:43:06.914096 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 12:43:06.915608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 12:43:06.933047 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (965)
May 13 12:43:06.933078 kernel: BTRFS info (device vda6): first mount of filesystem 79dad06b-b9d3-4cc5-b052-ebf459e9d4d7
May 13 12:43:06.934041 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 12:43:06.934068 kernel: BTRFS info (device vda6): using free-space-tree
May 13 12:43:06.938263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 12:43:06.965414 ignition[982]: INFO : Ignition 2.21.0
May 13 12:43:06.965414 ignition[982]: INFO : Stage: files
May 13 12:43:06.967969 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:43:06.967969 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:06.967969 ignition[982]: DEBUG : files: compiled without relabeling support, skipping
May 13 12:43:06.967969 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 12:43:06.967969 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 12:43:06.973965 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 12:43:06.973965 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 12:43:06.973965 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 12:43:06.973965 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 12:43:06.973965 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 12:43:06.970009 unknown[982]: wrote ssh authorized keys file for user: core
May 13 12:43:07.009030 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 12:43:07.119039 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 12:43:07.121172 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 12:43:07.121172 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 12:43:07.426336 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 12:43:07.440826 systemd-networkd[798]: eth0: Gained IPv6LL
May 13 12:43:07.477468 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:43:07.479282 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 12:43:07.493436 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 13 12:43:07.706593 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 12:43:07.884522 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 13 12:43:07.884522 ignition[982]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 12:43:07.888238 ignition[982]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 12:43:07.904442 ignition[982]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:43:07.907076 ignition[982]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 12:43:07.909333 ignition[982]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 12:43:07.909333 ignition[982]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 12:43:07.909333 ignition[982]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 12:43:07.909333 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:43:07.909333 ignition[982]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 12:43:07.909333 ignition[982]: INFO : files: files passed
May 13 12:43:07.909333 ignition[982]: INFO : Ignition finished successfully
May 13 12:43:07.911101 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 12:43:07.913305 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 12:43:07.916321 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 12:43:07.927987 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 12:43:07.930172 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 12:43:07.928064 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 12:43:07.933813 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:43:07.933813 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:43:07.937416 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 12:43:07.937740 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:43:07.940220 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 12:43:07.942827 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 12:43:07.969594 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 12:43:07.969707 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 12:43:07.971942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 12:43:07.973794 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 12:43:07.975582 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 12:43:07.976359 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 12:43:08.013929 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:43:08.016178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 12:43:08.033349 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 12:43:08.034528 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:43:08.036538 systemd[1]: Stopped target timers.target - Timer Units.
May 13 12:43:08.038290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 12:43:08.038396 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 12:43:08.040853 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 12:43:08.041921 systemd[1]: Stopped target basic.target - Basic System.
May 13 12:43:08.043715 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 12:43:08.045467 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 12:43:08.047219 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 12:43:08.049138 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 13 12:43:08.051113 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 12:43:08.052976 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 12:43:08.054995 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 12:43:08.056799 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 12:43:08.058898 systemd[1]: Stopped target swap.target - Swaps.
May 13 12:43:08.060433 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 12:43:08.060538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 12:43:08.062832 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 12:43:08.063961 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:43:08.065793 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 12:43:08.065898 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:43:08.067775 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 12:43:08.067874 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 12:43:08.070577 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 12:43:08.070692 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 12:43:08.072587 systemd[1]: Stopped target paths.target - Path Units.
May 13 12:43:08.074411 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 12:43:08.074520 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:43:08.076378 systemd[1]: Stopped target slices.target - Slice Units.
May 13 12:43:08.078161 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 12:43:08.079907 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 12:43:08.079981 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 12:43:08.081770 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 12:43:08.081845 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 12:43:08.083378 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 12:43:08.083484 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 12:43:08.085360 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 12:43:08.085455 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 12:43:08.088158 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 12:43:08.089807 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 12:43:08.089940 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:43:08.106749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 12:43:08.107610 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 12:43:08.107742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:43:08.109664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 12:43:08.109757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 12:43:08.115874 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 12:43:08.117393 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 12:43:08.119476 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 12:43:08.121196 ignition[1037]: INFO : Ignition 2.21.0
May 13 12:43:08.121196 ignition[1037]: INFO : Stage: umount
May 13 12:43:08.122831 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 12:43:08.122831 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 12:43:08.122831 ignition[1037]: INFO : umount: umount passed
May 13 12:43:08.122831 ignition[1037]: INFO : Ignition finished successfully
May 13 12:43:08.124095 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 12:43:08.124176 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 12:43:08.126942 systemd[1]: Stopped target network.target - Network.
May 13 12:43:08.128238 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 12:43:08.128298 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 12:43:08.129896 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 12:43:08.129938 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 12:43:08.131605 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 12:43:08.131652 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 12:43:08.133265 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 12:43:08.133307 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 12:43:08.135023 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 12:43:08.136694 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 12:43:08.142920 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 12:43:08.143054 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 12:43:08.146050 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 13 12:43:08.146283 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 12:43:08.146321 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:43:08.149968 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 13 12:43:08.154341 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 12:43:08.155284 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 12:43:08.158516 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 13 12:43:08.158705 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 13 12:43:08.160742 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 12:43:08.160775 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:43:08.163315 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 12:43:08.164222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 12:43:08.164283 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 12:43:08.166366 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 12:43:08.166416 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 12:43:08.169268 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 12:43:08.169310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 12:43:08.171486 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:43:08.175017 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 13 12:43:08.182455 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 12:43:08.182595 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 12:43:08.190629 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 12:43:08.190748 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 12:43:08.192624 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 12:43:08.192673 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 12:43:08.194775 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 12:43:08.194912 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:43:08.196329 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 12:43:08.196366 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 12:43:08.198044 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 12:43:08.198073 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:43:08.199833 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 12:43:08.199874 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 12:43:08.202734 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 12:43:08.202776 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 12:43:08.205364 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 12:43:08.205407 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 12:43:08.208162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 12:43:08.209309 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 13 12:43:08.209361 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:43:08.212079 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 12:43:08.212119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:43:08.214996 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 12:43:08.215035 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 12:43:08.225945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 12:43:08.226043 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 12:43:08.228260 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 12:43:08.230647 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 12:43:08.250446 systemd[1]: Switching root.
May 13 12:43:08.286221 systemd-journald[242]: Received SIGTERM from PID 1 (systemd).
May 13 12:43:08.286267 systemd-journald[242]: Journal stopped
May 13 12:43:09.050808 kernel: SELinux: policy capability network_peer_controls=1
May 13 12:43:09.050856 kernel: SELinux: policy capability open_perms=1
May 13 12:43:09.050866 kernel: SELinux: policy capability extended_socket_class=1
May 13 12:43:09.050878 kernel: SELinux: policy capability always_check_network=0
May 13 12:43:09.050892 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 12:43:09.050901 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 12:43:09.050910 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 12:43:09.050919 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 12:43:09.050929 kernel: SELinux: policy capability userspace_initial_context=0
May 13 12:43:09.050942 kernel: audit: type=1403 audit(1747140188.447:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 12:43:09.050957 systemd[1]: Successfully loaded SELinux policy in 31.616ms.
May 13 12:43:09.050972 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.297ms.
May 13 12:43:09.050983 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 13 12:43:09.050993 systemd[1]: Detected virtualization kvm.
May 13 12:43:09.051004 systemd[1]: Detected architecture arm64.
May 13 12:43:09.051014 systemd[1]: Detected first boot.
May 13 12:43:09.051024 systemd[1]: Initializing machine ID from VM UUID.
May 13 12:43:09.051034 zram_generator::config[1083]: No configuration found.
May 13 12:43:09.051045 kernel: NET: Registered PF_VSOCK protocol family
May 13 12:43:09.051054 systemd[1]: Populated /etc with preset unit settings.
May 13 12:43:09.051065 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 13 12:43:09.051074 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 12:43:09.051084 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 12:43:09.051093 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 12:43:09.051103 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 12:43:09.051115 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 12:43:09.051126 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 12:43:09.051138 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 12:43:09.051148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 12:43:09.051158 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 12:43:09.051168 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 12:43:09.051178 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 12:43:09.051204 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 12:43:09.051217 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 12:43:09.051229 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 12:43:09.051239 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 12:43:09.051249 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 12:43:09.051259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 12:43:09.051269 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 12:43:09.051279 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 12:43:09.051288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 12:43:09.051298 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 12:43:09.051309 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 12:43:09.051319 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 12:43:09.051329 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 12:43:09.051338 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 12:43:09.051348 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 12:43:09.051358 systemd[1]: Reached target slices.target - Slice Units.
May 13 12:43:09.051367 systemd[1]: Reached target swap.target - Swaps.
May 13 12:43:09.051377 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 12:43:09.051386 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 12:43:09.051397 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 13 12:43:09.051407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 12:43:09.051417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 12:43:09.051427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 12:43:09.051437 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 12:43:09.051447 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 12:43:09.051456 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 12:43:09.051466 systemd[1]: Mounting media.mount - External Media Directory...
May 13 12:43:09.051476 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 12:43:09.051487 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 12:43:09.051497 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 12:43:09.051507 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 12:43:09.051520 systemd[1]: Reached target machines.target - Containers.
May 13 12:43:09.051530 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 12:43:09.051540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:43:09.051564 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 12:43:09.051580 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 12:43:09.051595 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:43:09.051604 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:43:09.051614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:43:09.051624 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 12:43:09.051634 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:43:09.051643 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 12:43:09.051654 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 12:43:09.051663 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 12:43:09.051673 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 12:43:09.051683 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 12:43:09.051693 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:43:09.051703 kernel: fuse: init (API version 7.41)
May 13 12:43:09.051712 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 12:43:09.051722 kernel: loop: module loaded
May 13 12:43:09.051732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 12:43:09.051741 kernel: ACPI: bus type drm_connector registered
May 13 12:43:09.051750 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 12:43:09.051760 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 12:43:09.051771 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 13 12:43:09.051781 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 12:43:09.051791 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 12:43:09.051800 systemd[1]: Stopped verity-setup.service.
May 13 12:43:09.051810 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 12:43:09.051841 systemd-journald[1151]: Collecting audit messages is disabled.
May 13 12:43:09.051863 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 12:43:09.051892 systemd-journald[1151]: Journal started
May 13 12:43:09.051911 systemd-journald[1151]: Runtime Journal (/run/log/journal/60e33e8e62f248f2af226d0dd55e6a76) is 6M, max 48.5M, 42.4M free.
May 13 12:43:08.820636 systemd[1]: Queued start job for default target multi-user.target.
May 13 12:43:08.843101 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 12:43:08.843478 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 12:43:09.054918 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 12:43:09.055547 systemd[1]: Mounted media.mount - External Media Directory.
May 13 12:43:09.056626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 12:43:09.057806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 12:43:09.059061 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 12:43:09.060325 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 12:43:09.061728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 12:43:09.063252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 12:43:09.063415 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 12:43:09.065671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:43:09.065841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:43:09.067184 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:43:09.067371 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:43:09.068665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:43:09.068823 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:43:09.070398 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 12:43:09.070577 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 12:43:09.072019 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:43:09.072175 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:43:09.073534 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 12:43:09.074945 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 12:43:09.076491 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 12:43:09.078046 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 13 12:43:09.090048 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 12:43:09.092431 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 12:43:09.094403 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 12:43:09.095595 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 12:43:09.095631 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 12:43:09.097518 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 13 12:43:09.106926 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 12:43:09.108294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:43:09.109530 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 12:43:09.111734 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 12:43:09.112984 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:43:09.114305 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 12:43:09.117294 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:43:09.118136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 12:43:09.118385 systemd-journald[1151]: Time spent on flushing to /var/log/journal/60e33e8e62f248f2af226d0dd55e6a76 is 13.148ms for 885 entries.
May 13 12:43:09.118385 systemd-journald[1151]: System Journal (/var/log/journal/60e33e8e62f248f2af226d0dd55e6a76) is 8M, max 195.6M, 187.6M free.
May 13 12:43:09.140845 systemd-journald[1151]: Received client request to flush runtime journal.
May 13 12:43:09.140876 kernel: loop0: detected capacity change from 0 to 138376
May 13 12:43:09.121377 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 12:43:09.125624 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 12:43:09.130223 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 12:43:09.132421 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 12:43:09.134589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 12:43:09.136038 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 12:43:09.138463 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 12:43:09.142877 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 13 12:43:09.147232 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 12:43:09.155276 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 12:43:09.161878 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 13 12:43:09.170110 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 12:43:09.173255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 12:43:09.174947 kernel: loop1: detected capacity change from 0 to 107312
May 13 12:43:09.177402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 12:43:09.200165 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 13 12:43:09.200183 systemd-tmpfiles[1218]: ACLs are not supported, ignoring.
May 13 12:43:09.205147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 12:43:09.207250 kernel: loop2: detected capacity change from 0 to 189592
May 13 12:43:09.236237 kernel: loop3: detected capacity change from 0 to 138376
May 13 12:43:09.244245 kernel: loop4: detected capacity change from 0 to 107312
May 13 12:43:09.250225 kernel: loop5: detected capacity change from 0 to 189592
May 13 12:43:09.254717 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 12:43:09.255072 (sd-merge)[1225]: Merged extensions into '/usr'.
May 13 12:43:09.259276 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 12:43:09.259312 systemd[1]: Reloading...
May 13 12:43:09.321238 zram_generator::config[1250]: No configuration found.
May 13 12:43:09.383356 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 12:43:09.400105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:43:09.462628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 12:43:09.462963 systemd[1]: Reloading finished in 202 ms.
May 13 12:43:09.499485 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 12:43:09.501034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 12:43:09.518509 systemd[1]: Starting ensure-sysext.service...
May 13 12:43:09.522339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 12:43:09.533097 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
May 13 12:43:09.533114 systemd[1]: Reloading...
May 13 12:43:09.539318 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 13 12:43:09.539350 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 13 12:43:09.539597 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 12:43:09.539783 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 12:43:09.540394 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 12:43:09.540625 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
May 13 12:43:09.540672 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
May 13 12:43:09.543343 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:43:09.543355 systemd-tmpfiles[1286]: Skipping /boot
May 13 12:43:09.551898 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
May 13 12:43:09.551912 systemd-tmpfiles[1286]: Skipping /boot
May 13 12:43:09.580228 zram_generator::config[1313]: No configuration found.
May 13 12:43:09.649672 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:43:09.711397 systemd[1]: Reloading finished in 177 ms.
May 13 12:43:09.729520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 12:43:09.731058 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 12:43:09.757160 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:43:09.759411 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 12:43:09.769869 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 12:43:09.774260 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 12:43:09.781733 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 12:43:09.784354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 12:43:09.792902 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 12:43:09.795824 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 12:43:09.799558 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:43:09.806336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:43:09.817999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:43:09.822269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:43:09.823371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:43:09.823491 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:43:09.824461 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
May 13 12:43:09.824973 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 12:43:09.832347 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 12:43:09.836527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:43:09.836701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:43:09.841131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:43:09.841315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:43:09.843161 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:43:09.843310 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:43:09.847381 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 12:43:09.854440 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 12:43:09.856322 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 12:43:09.861795 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 12:43:09.876676 augenrules[1418]: No rules
May 13 12:43:09.879693 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:43:09.879923 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:43:09.888923 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:43:09.890131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 12:43:09.891122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 12:43:09.894234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 12:43:09.896486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 12:43:09.905998 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 12:43:09.907114 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 12:43:09.907366 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 13 12:43:09.908835 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 12:43:09.910229 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 12:43:09.917209 systemd[1]: Finished ensure-sysext.service.
May 13 12:43:09.918325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 12:43:09.918469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 12:43:09.920228 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 12:43:09.920392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 12:43:09.922636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 12:43:09.922772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 12:43:09.934817 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 12:43:09.934877 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 12:43:09.937589 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 12:43:09.939112 augenrules[1424]: /sbin/augenrules: No change
May 13 12:43:09.957381 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 12:43:09.959253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 12:43:09.960591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 12:43:09.968428 augenrules[1455]: No rules
May 13 12:43:09.969122 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:43:09.972168 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:43:09.992953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 12:43:09.996823 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 12:43:10.020906 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 12:43:10.022675 systemd-resolved[1352]: Positive Trust Anchors:
May 13 12:43:10.022692 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 12:43:10.022723 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 12:43:10.031296 systemd-resolved[1352]: Defaulting to hostname 'linux'.
May 13 12:43:10.035294 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 12:43:10.036498 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 12:43:10.039693 systemd-networkd[1429]: lo: Link UP
May 13 12:43:10.039701 systemd-networkd[1429]: lo: Gained carrier
May 13 12:43:10.040566 systemd-networkd[1429]: Enumeration completed
May 13 12:43:10.040666 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 12:43:10.041015 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:43:10.041019 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 12:43:10.041443 systemd-networkd[1429]: eth0: Link UP
May 13 12:43:10.041560 systemd-networkd[1429]: eth0: Gained carrier
May 13 12:43:10.041577 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 12:43:10.041866 systemd[1]: Reached target network.target - Network.
May 13 12:43:10.044145 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 13 12:43:10.046321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 12:43:10.047501 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 12:43:10.048856 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 12:43:10.050055 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 12:43:10.051560 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 12:43:10.052900 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 12:43:10.054215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 12:43:10.054245 systemd[1]: Reached target paths.target - Path Units. May 13 12:43:10.055283 systemd[1]: Reached target time-set.target - System Time Set. May 13 12:43:10.055292 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 12:43:10.061037 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 12:43:10.062268 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 12:43:10.063513 systemd[1]: Reached target timers.target - Timer Units. May 13 12:43:10.065484 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 12:43:10.067806 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 12:43:10.070527 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 12:43:10.071302 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. May 13 12:43:10.071939 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 12:43:10.073197 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 12:43:10.500150 systemd-resolved[1352]: Clock change detected. Flushing caches. May 13 12:43:10.500190 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 12:43:10.501086 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 12:43:10.501977 systemd-timesyncd[1446]: Initial clock synchronization to Tue 2025-05-13 12:43:10.500088 UTC. May 13 12:43:10.502547 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
May 13 12:43:10.504626 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 12:43:10.506003 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 12:43:10.507800 systemd[1]: Reached target sockets.target - Socket Units. May 13 12:43:10.508815 systemd[1]: Reached target basic.target - Basic System. May 13 12:43:10.509860 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 12:43:10.509891 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 12:43:10.511315 systemd[1]: Starting containerd.service - containerd container runtime... May 13 12:43:10.513340 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 12:43:10.515346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 12:43:10.519343 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 12:43:10.528060 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 12:43:10.529134 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 12:43:10.530171 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 12:43:10.535241 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 12:43:10.536089 jq[1492]: false May 13 12:43:10.540508 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 12:43:10.542587 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 12:43:10.546608 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 13 12:43:10.548497 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 12:43:10.548891 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 12:43:10.553707 systemd[1]: Starting update-engine.service - Update Engine... May 13 12:43:10.555985 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 12:43:10.561602 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 12:43:10.563383 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 12:43:10.563585 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 12:43:10.575553 systemd[1]: motdgen.service: Deactivated successfully. May 13 12:43:10.575840 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 12:43:10.580260 jq[1508]: true May 13 12:43:10.578081 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 12:43:10.578294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 13 12:43:10.588476 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 12:43:10.601268 jq[1513]: true May 13 12:43:10.614985 extend-filesystems[1493]: Found loop3 May 13 12:43:10.614985 extend-filesystems[1493]: Found loop4 May 13 12:43:10.614985 extend-filesystems[1493]: Found loop5 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda May 13 12:43:10.614985 extend-filesystems[1493]: Found vda1 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda2 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda3 May 13 12:43:10.614985 extend-filesystems[1493]: Found usr May 13 12:43:10.614985 extend-filesystems[1493]: Found vda4 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda6 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda7 May 13 12:43:10.614985 extend-filesystems[1493]: Found vda9 May 13 12:43:10.614985 extend-filesystems[1493]: Checking size of /dev/vda9 May 13 12:43:10.612413 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 12:43:10.639990 update_engine[1502]: I20250513 12:43:10.637416 1502 main.cc:92] Flatcar Update Engine starting May 13 12:43:10.640372 tar[1511]: linux-arm64/helm May 13 12:43:10.641303 extend-filesystems[1493]: Resized partition /dev/vda9 May 13 12:43:10.648102 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025) May 13 12:43:10.655150 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 12:43:10.660036 dbus-daemon[1490]: [system] SELinux support is enabled May 13 12:43:10.660462 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 12:43:10.663243 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
May 13 12:43:10.663301 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 12:43:10.664867 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 12:43:10.664891 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 12:43:10.667570 update_engine[1502]: I20250513 12:43:10.667515 1502 update_check_scheduler.cc:74] Next update check in 8m33s May 13 12:43:10.668458 systemd[1]: Started update-engine.service - Update Engine. May 13 12:43:10.681267 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 12:43:10.703180 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 12:43:10.720250 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (Power Button) May 13 12:43:10.720520 systemd-logind[1500]: New seat seat0. May 13 12:43:10.721732 systemd[1]: Started systemd-logind.service - User Login Management. May 13 12:43:10.724600 bash[1546]: Updated "/home/core/.ssh/authorized_keys" May 13 12:43:10.724703 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 12:43:10.724703 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 12:43:10.724703 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 12:43:10.724900 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 12:43:10.738684 extend-filesystems[1493]: Resized filesystem in /dev/vda9 May 13 12:43:10.725107 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 12:43:10.735492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 12:43:10.737616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
May 13 12:43:10.743966 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 12:43:10.775394 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 12:43:10.847461 containerd[1514]: time="2025-05-13T12:43:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 12:43:10.850639 containerd[1514]: time="2025-05-13T12:43:10.850037548Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 13 12:43:10.864632 containerd[1514]: time="2025-05-13T12:43:10.864283988Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.44µs" May 13 12:43:10.864632 containerd[1514]: time="2025-05-13T12:43:10.864335468Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 12:43:10.864632 containerd[1514]: time="2025-05-13T12:43:10.864356028Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 12:43:10.864986 containerd[1514]: time="2025-05-13T12:43:10.864959188Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 12:43:10.865115 containerd[1514]: time="2025-05-13T12:43:10.865097508Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 12:43:10.865209 containerd[1514]: time="2025-05-13T12:43:10.865194388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 12:43:10.865387 containerd[1514]: time="2025-05-13T12:43:10.865365188Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 May 13 12:43:10.865582 containerd[1514]: time="2025-05-13T12:43:10.865454148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:43:10.865923 containerd[1514]: time="2025-05-13T12:43:10.865898548Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 12:43:10.865987 containerd[1514]: time="2025-05-13T12:43:10.865973668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:43:10.866097 containerd[1514]: time="2025-05-13T12:43:10.866079348Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866161508Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866268148Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866481268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866521548Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866533988Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 May 13 12:43:10.866670 containerd[1514]: time="2025-05-13T12:43:10.866575908Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 12:43:10.867644 containerd[1514]: time="2025-05-13T12:43:10.867603308Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 12:43:10.867745 containerd[1514]: time="2025-05-13T12:43:10.867722788Z" level=info msg="metadata content store policy set" policy=shared May 13 12:43:10.871463 containerd[1514]: time="2025-05-13T12:43:10.871416228Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 12:43:10.871562 containerd[1514]: time="2025-05-13T12:43:10.871484148Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 12:43:10.871562 containerd[1514]: time="2025-05-13T12:43:10.871507828Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 12:43:10.871562 containerd[1514]: time="2025-05-13T12:43:10.871535188Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 12:43:10.871562 containerd[1514]: time="2025-05-13T12:43:10.871547988Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 12:43:10.871562 containerd[1514]: time="2025-05-13T12:43:10.871558988Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871572268Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871585268Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 
12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871597548Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871608868Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871618868Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871632348Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871798028Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871822548Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 12:43:10.871783 containerd[1514]: time="2025-05-13T12:43:10.871836828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871854508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871871988Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871884228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871906188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: 
time="2025-05-13T12:43:10.871919348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871932108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871944148Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 12:43:10.871993 containerd[1514]: time="2025-05-13T12:43:10.871954268Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 12:43:10.872232 containerd[1514]: time="2025-05-13T12:43:10.872165668Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 12:43:10.872232 containerd[1514]: time="2025-05-13T12:43:10.872218668Z" level=info msg="Start snapshots syncer" May 13 12:43:10.872289 containerd[1514]: time="2025-05-13T12:43:10.872242468Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 12:43:10.872629 containerd[1514]: time="2025-05-13T12:43:10.872541828Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 12:43:10.872629 containerd[1514]: time="2025-05-13T12:43:10.872601708Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 12:43:10.872811 containerd[1514]: time="2025-05-13T12:43:10.872680268Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 12:43:10.872833 containerd[1514]: time="2025-05-13T12:43:10.872818628Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872843788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872854788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872864668Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872892308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872907788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872918188Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 12:43:10.872946 containerd[1514]: time="2025-05-13T12:43:10.872943508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.872956228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.872966508Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.872999628Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873013428Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873022308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873031868Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873039908Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873048788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 12:43:10.873121 containerd[1514]: time="2025-05-13T12:43:10.873059508Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 12:43:10.873546 containerd[1514]: time="2025-05-13T12:43:10.873159988Z" level=info msg="runtime interface created" May 13 12:43:10.873546 containerd[1514]: time="2025-05-13T12:43:10.873167068Z" level=info msg="created NRI interface" May 13 12:43:10.873546 containerd[1514]: time="2025-05-13T12:43:10.873175748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 12:43:10.873546 containerd[1514]: time="2025-05-13T12:43:10.873188748Z" level=info msg="Connect containerd service" May 13 12:43:10.873546 containerd[1514]: time="2025-05-13T12:43:10.873216148Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 12:43:10.873953 
containerd[1514]: time="2025-05-13T12:43:10.873923028Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990427748Z" level=info msg="Start subscribing containerd event" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990509948Z" level=info msg="Start recovering state" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990600788Z" level=info msg="Start event monitor" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990618308Z" level=info msg="Start cni network conf syncer for default" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990625988Z" level=info msg="Start streaming server" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990634308Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990641148Z" level=info msg="runtime interface starting up..." May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990646468Z" level=info msg="starting plugins..." May 13 12:43:10.990663 containerd[1514]: time="2025-05-13T12:43:10.990658868Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 12:43:10.991120 containerd[1514]: time="2025-05-13T12:43:10.991020988Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 12:43:10.991120 containerd[1514]: time="2025-05-13T12:43:10.991080988Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 12:43:10.991275 containerd[1514]: time="2025-05-13T12:43:10.991260828Z" level=info msg="containerd successfully booted in 0.144300s" May 13 12:43:10.991458 systemd[1]: Started containerd.service - containerd container runtime. 
May 13 12:43:11.028054 sshd_keygen[1510]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 12:43:11.042806 tar[1511]: linux-arm64/LICENSE May 13 12:43:11.042939 tar[1511]: linux-arm64/README.md May 13 12:43:11.049249 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 12:43:11.058344 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 12:43:11.059843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 12:43:11.067261 systemd[1]: issuegen.service: Deactivated successfully. May 13 12:43:11.068264 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 12:43:11.070837 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 12:43:11.097254 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 12:43:11.100260 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 12:43:11.102623 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 12:43:11.104033 systemd[1]: Reached target getty.target - Login Prompts. May 13 12:43:12.472346 systemd-networkd[1429]: eth0: Gained IPv6LL May 13 12:43:12.474924 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 12:43:12.477849 systemd[1]: Reached target network-online.target - Network is Online. May 13 12:43:12.480329 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 12:43:12.482706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:43:12.507773 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 12:43:12.523907 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 12:43:12.525227 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 12:43:12.526770 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 13 12:43:12.531420 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 12:43:12.987347 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:43:12.988901 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 12:43:12.992764 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 12:43:12.995239 systemd[1]: Startup finished in 2.124s (kernel) + 4.828s (initrd) + 4.160s (userspace) = 11.114s. May 13 12:43:13.422378 kubelet[1626]: E0513 12:43:13.422263 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 12:43:13.424464 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 12:43:13.424601 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 12:43:13.424919 systemd[1]: kubelet.service: Consumed 774ms CPU time, 231.6M memory peak. May 13 12:43:16.446810 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 12:43:16.447970 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:55002.service - OpenSSH per-connection server daemon (10.0.0.1:55002). May 13 12:43:16.556215 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 55002 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:43:16.557939 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:43:16.564302 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 12:43:16.565331 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 13 12:43:16.572215 systemd-logind[1500]: New session 1 of user core. May 13 12:43:16.603029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 12:43:16.605603 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 12:43:16.628324 (systemd)[1643]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 12:43:16.630409 systemd-logind[1500]: New session c1 of user core. May 13 12:43:16.749975 systemd[1643]: Queued start job for default target default.target. May 13 12:43:16.772111 systemd[1643]: Created slice app.slice - User Application Slice. May 13 12:43:16.772170 systemd[1643]: Reached target paths.target - Paths. May 13 12:43:16.772214 systemd[1643]: Reached target timers.target - Timers. May 13 12:43:16.773506 systemd[1643]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 12:43:16.782926 systemd[1643]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 12:43:16.782987 systemd[1643]: Reached target sockets.target - Sockets. May 13 12:43:16.783028 systemd[1643]: Reached target basic.target - Basic System. May 13 12:43:16.783062 systemd[1643]: Reached target default.target - Main User Target. May 13 12:43:16.783088 systemd[1643]: Startup finished in 146ms. May 13 12:43:16.783395 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 12:43:16.784853 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 12:43:16.854268 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:55004.service - OpenSSH per-connection server daemon (10.0.0.1:55004). May 13 12:43:16.914670 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 55004 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:43:16.915953 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:43:16.921040 systemd-logind[1500]: New session 2 of user core. 
May 13 12:43:16.929338 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 12:43:16.981594 sshd[1656]: Connection closed by 10.0.0.1 port 55004 May 13 12:43:16.981918 sshd-session[1654]: pam_unix(sshd:session): session closed for user core May 13 12:43:16.990224 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:55004.service: Deactivated successfully. May 13 12:43:16.992736 systemd[1]: session-2.scope: Deactivated successfully. May 13 12:43:16.993436 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. May 13 12:43:16.995896 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:55008.service - OpenSSH per-connection server daemon (10.0.0.1:55008). May 13 12:43:16.996774 systemd-logind[1500]: Removed session 2. May 13 12:43:17.044004 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 55008 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:43:17.045349 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:43:17.050226 systemd-logind[1500]: New session 3 of user core. May 13 12:43:17.063322 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 12:43:17.112099 sshd[1664]: Connection closed by 10.0.0.1 port 55008 May 13 12:43:17.112442 sshd-session[1662]: pam_unix(sshd:session): session closed for user core May 13 12:43:17.124479 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:55008.service: Deactivated successfully. May 13 12:43:17.126703 systemd[1]: session-3.scope: Deactivated successfully. May 13 12:43:17.127422 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. May 13 12:43:17.129847 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:55016.service - OpenSSH per-connection server daemon (10.0.0.1:55016). May 13 12:43:17.130394 systemd-logind[1500]: Removed session 3. 
May 13 12:43:17.179431 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 55016 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:43:17.180705 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:43:17.185072 systemd-logind[1500]: New session 4 of user core.
May 13 12:43:17.191302 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 12:43:17.243937 sshd[1672]: Connection closed by 10.0.0.1 port 55016
May 13 12:43:17.244371 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
May 13 12:43:17.257252 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:55016.service: Deactivated successfully.
May 13 12:43:17.258749 systemd[1]: session-4.scope: Deactivated successfully.
May 13 12:43:17.259498 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit.
May 13 12:43:17.261877 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:55024.service - OpenSSH per-connection server daemon (10.0.0.1:55024).
May 13 12:43:17.262558 systemd-logind[1500]: Removed session 4.
May 13 12:43:17.316075 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 55024 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:43:17.317590 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:43:17.321511 systemd-logind[1500]: New session 5 of user core.
May 13 12:43:17.330317 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 12:43:17.388976 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 12:43:17.389287 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:43:17.410836 sudo[1681]: pam_unix(sudo:session): session closed for user root
May 13 12:43:17.413734 sshd[1680]: Connection closed by 10.0.0.1 port 55024
May 13 12:43:17.413619 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
May 13 12:43:17.425356 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:55024.service: Deactivated successfully.
May 13 12:43:17.427638 systemd[1]: session-5.scope: Deactivated successfully.
May 13 12:43:17.428342 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit.
May 13 12:43:17.430635 systemd[1]: Started sshd@5-10.0.0.71:22-10.0.0.1:55032.service - OpenSSH per-connection server daemon (10.0.0.1:55032).
May 13 12:43:17.431494 systemd-logind[1500]: Removed session 5.
May 13 12:43:17.481993 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 55032 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:43:17.483354 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:43:17.488021 systemd-logind[1500]: New session 6 of user core.
May 13 12:43:17.498317 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 12:43:17.552545 sudo[1691]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 12:43:17.552834 sudo[1691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:43:17.557721 sudo[1691]: pam_unix(sudo:session): session closed for user root
May 13 12:43:17.562771 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 13 12:43:17.563045 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:43:17.571747 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 13 12:43:17.618171 augenrules[1713]: No rules
May 13 12:43:17.619412 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 12:43:17.620267 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 13 12:43:17.621199 sudo[1690]: pam_unix(sudo:session): session closed for user root
May 13 12:43:17.622411 sshd[1689]: Connection closed by 10.0.0.1 port 55032
May 13 12:43:17.622878 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
May 13 12:43:17.630315 systemd[1]: sshd@5-10.0.0.71:22-10.0.0.1:55032.service: Deactivated successfully.
May 13 12:43:17.632272 systemd[1]: session-6.scope: Deactivated successfully.
May 13 12:43:17.633762 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit.
May 13 12:43:17.635877 systemd[1]: Started sshd@6-10.0.0.71:22-10.0.0.1:55046.service - OpenSSH per-connection server daemon (10.0.0.1:55046).
May 13 12:43:17.636821 systemd-logind[1500]: Removed session 6.
May 13 12:43:17.685895 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 55046 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:43:17.687162 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:43:17.691714 systemd-logind[1500]: New session 7 of user core.
May 13 12:43:17.698331 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 12:43:17.749269 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 12:43:17.749565 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 12:43:18.104752 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 12:43:18.122508 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 12:43:18.393865 dockerd[1748]: time="2025-05-13T12:43:18.393746748Z" level=info msg="Starting up"
May 13 12:43:18.394609 dockerd[1748]: time="2025-05-13T12:43:18.394557868Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 13 12:43:18.489173 dockerd[1748]: time="2025-05-13T12:43:18.489088908Z" level=info msg="Loading containers: start."
May 13 12:43:18.497162 kernel: Initializing XFRM netlink socket
May 13 12:43:18.707309 systemd-networkd[1429]: docker0: Link UP
May 13 12:43:18.710479 dockerd[1748]: time="2025-05-13T12:43:18.710425548Z" level=info msg="Loading containers: done."
May 13 12:43:18.724733 dockerd[1748]: time="2025-05-13T12:43:18.724674108Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 12:43:18.724875 dockerd[1748]: time="2025-05-13T12:43:18.724806468Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 13 12:43:18.724947 dockerd[1748]: time="2025-05-13T12:43:18.724921068Z" level=info msg="Initializing buildkit"
May 13 12:43:18.747750 dockerd[1748]: time="2025-05-13T12:43:18.747698348Z" level=info msg="Completed buildkit initialization"
May 13 12:43:18.754728 dockerd[1748]: time="2025-05-13T12:43:18.754668748Z" level=info msg="Daemon has completed initialization"
May 13 12:43:18.754988 dockerd[1748]: time="2025-05-13T12:43:18.754879668Z" level=info msg="API listen on /run/docker.sock"
May 13 12:43:18.755086 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 12:43:19.472291 containerd[1514]: time="2025-05-13T12:43:19.472240108Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\""
May 13 12:43:20.126242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682082601.mount: Deactivated successfully.
May 13 12:43:21.094183 containerd[1514]: time="2025-05-13T12:43:21.094116348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:21.094757 containerd[1514]: time="2025-05-13T12:43:21.094716708Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610"
May 13 12:43:21.096078 containerd[1514]: time="2025-05-13T12:43:21.096034948Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:21.098209 containerd[1514]: time="2025-05-13T12:43:21.098170228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:21.099781 containerd[1514]: time="2025-05-13T12:43:21.099716508Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.62743244s"
May 13 12:43:21.099781 containerd[1514]: time="2025-05-13T12:43:21.099756308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\""
May 13 12:43:21.100394 containerd[1514]: time="2025-05-13T12:43:21.100316948Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\""
May 13 12:43:22.195265 containerd[1514]: time="2025-05-13T12:43:22.195217148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:22.196252 containerd[1514]: time="2025-05-13T12:43:22.196216788Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980"
May 13 12:43:22.197034 containerd[1514]: time="2025-05-13T12:43:22.196972468Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:22.199979 containerd[1514]: time="2025-05-13T12:43:22.199934908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:22.201178 containerd[1514]: time="2025-05-13T12:43:22.201068068Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.10071832s"
May 13 12:43:22.201178 containerd[1514]: time="2025-05-13T12:43:22.201110348Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\""
May 13 12:43:22.201885 containerd[1514]: time="2025-05-13T12:43:22.201851428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\""
May 13 12:43:23.407279 containerd[1514]: time="2025-05-13T12:43:23.407226148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:23.408220 containerd[1514]: time="2025-05-13T12:43:23.408005348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815"
May 13 12:43:23.408989 containerd[1514]: time="2025-05-13T12:43:23.408959548Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:23.411441 containerd[1514]: time="2025-05-13T12:43:23.411372948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:23.412565 containerd[1514]: time="2025-05-13T12:43:23.412473148Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.21058224s"
May 13 12:43:23.412565 containerd[1514]: time="2025-05-13T12:43:23.412512748Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\""
May 13 12:43:23.413009 containerd[1514]: time="2025-05-13T12:43:23.412986508Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\""
May 13 12:43:23.675108 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 12:43:23.676848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:43:23.789878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:43:23.793944 (kubelet)[2028]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 12:43:23.833710 kubelet[2028]: E0513 12:43:23.833665 2028 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 12:43:23.836884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 12:43:23.837009 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 12:43:23.837323 systemd[1]: kubelet.service: Consumed 140ms CPU time, 95.1M memory peak.
May 13 12:43:24.441028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979788045.mount: Deactivated successfully.
May 13 12:43:24.782653 containerd[1514]: time="2025-05-13T12:43:24.782531428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:24.783792 containerd[1514]: time="2025-05-13T12:43:24.783752308Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919"
May 13 12:43:24.784707 containerd[1514]: time="2025-05-13T12:43:24.784648748Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:24.786769 containerd[1514]: time="2025-05-13T12:43:24.786710828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:24.787535 containerd[1514]: time="2025-05-13T12:43:24.787179508Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.3741622s"
May 13 12:43:24.787535 containerd[1514]: time="2025-05-13T12:43:24.787214908Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\""
May 13 12:43:24.787758 containerd[1514]: time="2025-05-13T12:43:24.787735428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 12:43:25.309632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072664834.mount: Deactivated successfully.
May 13 12:43:25.926648 containerd[1514]: time="2025-05-13T12:43:25.926597668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:25.927461 containerd[1514]: time="2025-05-13T12:43:25.927433548Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 12:43:25.928485 containerd[1514]: time="2025-05-13T12:43:25.928419868Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:25.932055 containerd[1514]: time="2025-05-13T12:43:25.930953708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:25.932241 containerd[1514]: time="2025-05-13T12:43:25.932005708Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.14423888s"
May 13 12:43:25.932281 containerd[1514]: time="2025-05-13T12:43:25.932243068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 12:43:25.932792 containerd[1514]: time="2025-05-13T12:43:25.932756908Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 13 12:43:26.364232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065839851.mount: Deactivated successfully.
May 13 12:43:26.369374 containerd[1514]: time="2025-05-13T12:43:26.369313388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:43:26.370709 containerd[1514]: time="2025-05-13T12:43:26.370668468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 13 12:43:26.371534 containerd[1514]: time="2025-05-13T12:43:26.371493028Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:43:26.373913 containerd[1514]: time="2025-05-13T12:43:26.373877828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 12:43:26.375231 containerd[1514]: time="2025-05-13T12:43:26.375195428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 442.40536ms"
May 13 12:43:26.375276 containerd[1514]: time="2025-05-13T12:43:26.375231468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 13 12:43:26.375775 containerd[1514]: time="2025-05-13T12:43:26.375740548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 13 12:43:26.832244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446094045.mount: Deactivated successfully.
May 13 12:43:28.626111 containerd[1514]: time="2025-05-13T12:43:28.626057268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:28.627248 containerd[1514]: time="2025-05-13T12:43:28.627202108Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 13 12:43:28.628203 containerd[1514]: time="2025-05-13T12:43:28.628168308Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:28.631405 containerd[1514]: time="2025-05-13T12:43:28.631361868Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 12:43:28.632975 containerd[1514]: time="2025-05-13T12:43:28.632936548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.25716196s"
May 13 12:43:28.632975 containerd[1514]: time="2025-05-13T12:43:28.632971668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 13 12:43:33.748044 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:43:33.748341 systemd[1]: kubelet.service: Consumed 140ms CPU time, 95.1M memory peak.
May 13 12:43:33.751388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:43:33.772077 systemd[1]: Reload requested from client PID 2176 ('systemctl') (unit session-7.scope)...
May 13 12:43:33.772094 systemd[1]: Reloading...
May 13 12:43:33.832174 zram_generator::config[2216]: No configuration found.
May 13 12:43:33.942596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 12:43:34.027787 systemd[1]: Reloading finished in 255 ms.
May 13 12:43:34.084562 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 12:43:34.084633 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 12:43:34.084918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:43:34.084969 systemd[1]: kubelet.service: Consumed 80ms CPU time, 82.4M memory peak.
May 13 12:43:34.086469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 12:43:34.207748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 12:43:34.212004 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 12:43:34.251647 kubelet[2264]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:43:34.251647 kubelet[2264]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 12:43:34.251647 kubelet[2264]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 12:43:34.251972 kubelet[2264]: I0513 12:43:34.251765 2264 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 12:43:34.939826 kubelet[2264]: I0513 12:43:34.939780 2264 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
May 13 12:43:34.939826 kubelet[2264]: I0513 12:43:34.939814 2264 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 12:43:34.940077 kubelet[2264]: I0513 12:43:34.940049 2264 server.go:929] "Client rotation is on, will bootstrap in background"
May 13 12:43:34.975256 kubelet[2264]: E0513 12:43:34.975225 2264 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
May 13 12:43:34.976195 kubelet[2264]: I0513 12:43:34.976122 2264 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 12:43:34.986093 kubelet[2264]: I0513 12:43:34.986066 2264 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 13 12:43:34.989777 kubelet[2264]: I0513 12:43:34.989754 2264 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 12:43:34.990073 kubelet[2264]: I0513 12:43:34.990051 2264 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 13 12:43:34.990242 kubelet[2264]: I0513 12:43:34.990208 2264 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 12:43:34.990402 kubelet[2264]: I0513 12:43:34.990232 2264 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 13 12:43:34.990540 kubelet[2264]: I0513 12:43:34.990529 2264 topology_manager.go:138] "Creating topology manager with none policy"
May 13 12:43:34.990540 kubelet[2264]: I0513 12:43:34.990541 2264 container_manager_linux.go:300] "Creating device plugin manager"
May 13 12:43:34.990733 kubelet[2264]: I0513 12:43:34.990708 2264 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:43:34.992703 kubelet[2264]: I0513 12:43:34.992361 2264 kubelet.go:408] "Attempting to sync node with API server"
May 13 12:43:34.992703 kubelet[2264]: I0513 12:43:34.992396 2264 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 12:43:34.992703 kubelet[2264]: I0513 12:43:34.992484 2264 kubelet.go:314] "Adding apiserver pod source"
May 13 12:43:34.992703 kubelet[2264]: I0513 12:43:34.992497 2264 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 12:43:34.996259 kubelet[2264]: I0513 12:43:34.996156 2264 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 13 12:43:34.996684 kubelet[2264]: W0513 12:43:34.996640 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
May 13 12:43:34.996772 kubelet[2264]: E0513 12:43:34.996756 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
May 13 12:43:34.997248 kubelet[2264]: W0513 12:43:34.997205 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
May 13 12:43:34.997299 kubelet[2264]: E0513 12:43:34.997254 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
May 13 12:43:34.998772 kubelet[2264]: I0513 12:43:34.998747 2264 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 12:43:34.999470 kubelet[2264]: W0513 12:43:34.999448 2264 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 12:43:35.000211 kubelet[2264]: I0513 12:43:35.000197 2264 server.go:1269] "Started kubelet"
May 13 12:43:35.001653 kubelet[2264]: I0513 12:43:35.001312 2264 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 12:43:35.001653 kubelet[2264]: I0513 12:43:35.001608 2264 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 12:43:35.001751 kubelet[2264]: I0513 12:43:35.001715 2264 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 12:43:35.002331 kubelet[2264]: I0513 12:43:35.002292 2264 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 12:43:35.002824 kubelet[2264]: I0513 12:43:35.002801 2264 server.go:460] "Adding debug handlers to kubelet server"
May 13 12:43:35.004312 kubelet[2264]: I0513 12:43:35.004230 2264 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 13 12:43:35.004543 kubelet[2264]: E0513 12:43:35.003454 2264 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.71:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f16c4cfe27134 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 12:43:35.000166708 +0000 UTC m=+0.784817841,LastTimestamp:2025-05-13 12:43:35.000166708 +0000 UTC m=+0.784817841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 12:43:35.005120 kubelet[2264]: E0513 12:43:35.005084 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 12:43:35.005277 kubelet[2264]: I0513 12:43:35.005257 2264 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 13 12:43:35.005581 kubelet[2264]: I0513 12:43:35.005550 2264 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 13 12:43:35.005892 kubelet[2264]: I0513 12:43:35.005871 2264 factory.go:221] Registration of the systemd container factory successfully
May 13 12:43:35.006038 kubelet[2264]: I0513 12:43:35.005669 2264 reconciler.go:26] "Reconciler: start to sync state"
May 13 12:43:35.006038 kubelet[2264]: E0513 12:43:35.005733 2264 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 12:43:35.006038 kubelet[2264]: I0513 12:43:35.006008 2264 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 12:43:35.006118 kubelet[2264]: W0513 12:43:35.006077 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused
May 13 12:43:35.006138 kubelet[2264]: E0513 12:43:35.006117 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError"
May 13 12:43:35.006138 kubelet[2264]: E0513 12:43:35.005256 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="200ms"
May 13 12:43:35.008167 kubelet[2264]: I0513 12:43:35.007847 2264 factory.go:221] Registration of the containerd container factory successfully
May 13 12:43:35.018934 kubelet[2264]: I0513 12:43:35.018916 2264 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 12:43:35.019030 kubelet[2264]: I0513 12:43:35.019020 2264 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 12:43:35.019078 kubelet[2264]: I0513 12:43:35.019070 2264 state_mem.go:36] "Initialized new in-memory state store"
May 13 12:43:35.020030 kubelet[2264]: I0513 12:43:35.019996 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 12:43:35.021202 kubelet[2264]: I0513 12:43:35.021170 2264 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 12:43:35.021202 kubelet[2264]: I0513 12:43:35.021201 2264 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 12:43:35.021296 kubelet[2264]: I0513 12:43:35.021217 2264 kubelet.go:2321] "Starting kubelet main sync loop"
May 13 12:43:35.021296 kubelet[2264]: E0513 12:43:35.021256 2264 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 12:43:35.106179 kubelet[2264]: E0513 12:43:35.106126 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 12:43:35.121370 kubelet[2264]: E0513 12:43:35.121331 2264 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 13 12:43:35.131927 kubelet[2264]: I0513 12:43:35.131792 2264 policy_none.go:49] "None policy: Start"
May 13 12:43:35.132515 kubelet[2264]: I0513 12:43:35.132489 2264 memory_manager.go:170]
"Starting memorymanager" policy="None" May 13 12:43:35.132515 kubelet[2264]: I0513 12:43:35.132516 2264 state_mem.go:35] "Initializing new in-memory state store" May 13 12:43:35.132701 kubelet[2264]: W0513 12:43:35.132621 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 12:43:35.132766 kubelet[2264]: E0513 12:43:35.132746 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 13 12:43:35.140490 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 12:43:35.156894 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 12:43:35.159969 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 12:43:35.174023 kubelet[2264]: I0513 12:43:35.173990 2264 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:43:35.174215 kubelet[2264]: I0513 12:43:35.174195 2264 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:43:35.174261 kubelet[2264]: I0513 12:43:35.174220 2264 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:43:35.174641 kubelet[2264]: I0513 12:43:35.174451 2264 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:43:35.175634 kubelet[2264]: E0513 12:43:35.175605 2264 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 12:43:35.207230 kubelet[2264]: E0513 12:43:35.207131 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="400ms" May 13 12:43:35.275767 kubelet[2264]: I0513 12:43:35.275734 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:43:35.276216 kubelet[2264]: E0513 12:43:35.276112 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 12:43:35.328683 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 13 12:43:35.347778 systemd[1]: Created slice kubepods-burstable-pod24f02d37bf13a44f4ae1a909c0b91e21.slice - libcontainer container kubepods-burstable-pod24f02d37bf13a44f4ae1a909c0b91e21.slice. 
May 13 12:43:35.351245 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 13 12:43:35.407923 kubelet[2264]: I0513 12:43:35.407866 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:35.408021 kubelet[2264]: I0513 12:43:35.407925 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:35.408021 kubelet[2264]: I0513 12:43:35.407987 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 12:43:35.408118 kubelet[2264]: I0513 12:43:35.408041 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" May 13 12:43:35.408118 kubelet[2264]: I0513 12:43:35.408093 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:35.408275 kubelet[2264]: I0513 12:43:35.408120 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:35.408275 kubelet[2264]: I0513 12:43:35.408137 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:35.408275 kubelet[2264]: I0513 12:43:35.408183 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" May 13 12:43:35.408275 kubelet[2264]: I0513 12:43:35.408199 2264 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" May 13 12:43:35.477783 kubelet[2264]: I0513 12:43:35.477706 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:43:35.477983 
kubelet[2264]: E0513 12:43:35.477939 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 12:43:35.608263 kubelet[2264]: E0513 12:43:35.608217 2264 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.71:6443: connect: connection refused" interval="800ms" May 13 12:43:35.647066 containerd[1514]: time="2025-05-13T12:43:35.647015468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 13 12:43:35.650667 containerd[1514]: time="2025-05-13T12:43:35.650531948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:24f02d37bf13a44f4ae1a909c0b91e21,Namespace:kube-system,Attempt:0,}" May 13 12:43:35.654057 containerd[1514]: time="2025-05-13T12:43:35.654025108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 13 12:43:35.675939 containerd[1514]: time="2025-05-13T12:43:35.675899708Z" level=info msg="connecting to shim 81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f" address="unix:///run/containerd/s/bb15f9104298cd32fbfdf500fd9dcacea72b5030f2de166dba5afa199de403b2" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:35.677248 containerd[1514]: time="2025-05-13T12:43:35.677207708Z" level=info msg="connecting to shim 27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba" address="unix:///run/containerd/s/8b15a1051d474c03588180727ed06785139ff04d1cac9e8fcd10cca9ec1fc1a4" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:35.684343 containerd[1514]: time="2025-05-13T12:43:35.684207788Z" 
level=info msg="connecting to shim 001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110" address="unix:///run/containerd/s/853544acdcf5ba51ef9c88ee6ccb3be4d370bedf9cbde53fcbdbd70a08baccec" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:35.704303 systemd[1]: Started cri-containerd-81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f.scope - libcontainer container 81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f. May 13 12:43:35.707844 systemd[1]: Started cri-containerd-001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110.scope - libcontainer container 001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110. May 13 12:43:35.708846 systemd[1]: Started cri-containerd-27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba.scope - libcontainer container 27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba. May 13 12:43:35.744385 containerd[1514]: time="2025-05-13T12:43:35.743446748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f\"" May 13 12:43:35.747690 containerd[1514]: time="2025-05-13T12:43:35.747638748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110\"" May 13 12:43:35.750118 containerd[1514]: time="2025-05-13T12:43:35.750087748Z" level=info msg="CreateContainer within sandbox \"81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 12:43:35.750733 containerd[1514]: time="2025-05-13T12:43:35.750696188Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:24f02d37bf13a44f4ae1a909c0b91e21,Namespace:kube-system,Attempt:0,} returns sandbox id \"27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba\"" May 13 12:43:35.751494 containerd[1514]: time="2025-05-13T12:43:35.751464748Z" level=info msg="CreateContainer within sandbox \"001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 12:43:35.753250 containerd[1514]: time="2025-05-13T12:43:35.753201148Z" level=info msg="CreateContainer within sandbox \"27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 12:43:35.760188 containerd[1514]: time="2025-05-13T12:43:35.760158028Z" level=info msg="Container 2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:35.762781 containerd[1514]: time="2025-05-13T12:43:35.762752908Z" level=info msg="Container affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:35.766748 containerd[1514]: time="2025-05-13T12:43:35.765885108Z" level=info msg="Container 5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:35.771808 containerd[1514]: time="2025-05-13T12:43:35.771769228Z" level=info msg="CreateContainer within sandbox \"81ce8d1d199a53e06bf9afe86814886ef5a931c040bcbbfed8f614b956a1f68f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7\"" May 13 12:43:35.772453 containerd[1514]: time="2025-05-13T12:43:35.772428268Z" level=info msg="StartContainer for \"2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7\"" May 13 12:43:35.773594 containerd[1514]: time="2025-05-13T12:43:35.773565868Z" 
level=info msg="connecting to shim 2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7" address="unix:///run/containerd/s/bb15f9104298cd32fbfdf500fd9dcacea72b5030f2de166dba5afa199de403b2" protocol=ttrpc version=3 May 13 12:43:35.774629 containerd[1514]: time="2025-05-13T12:43:35.774599068Z" level=info msg="CreateContainer within sandbox \"001c34a6bc89aebb423065de92124f69e7bdc4825d6388b82007702bec6e7110\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180\"" May 13 12:43:35.776169 containerd[1514]: time="2025-05-13T12:43:35.775736948Z" level=info msg="StartContainer for \"affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180\"" May 13 12:43:35.777092 containerd[1514]: time="2025-05-13T12:43:35.777044228Z" level=info msg="CreateContainer within sandbox \"27509a6c7d96ca33d5e0f1e63ea518c45683f03d8afe6d73bdfd68f2c65381ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854\"" May 13 12:43:35.777483 containerd[1514]: time="2025-05-13T12:43:35.777455108Z" level=info msg="StartContainer for \"5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854\"" May 13 12:43:35.778200 containerd[1514]: time="2025-05-13T12:43:35.778165108Z" level=info msg="connecting to shim affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180" address="unix:///run/containerd/s/853544acdcf5ba51ef9c88ee6ccb3be4d370bedf9cbde53fcbdbd70a08baccec" protocol=ttrpc version=3 May 13 12:43:35.778417 containerd[1514]: time="2025-05-13T12:43:35.778391468Z" level=info msg="connecting to shim 5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854" address="unix:///run/containerd/s/8b15a1051d474c03588180727ed06785139ff04d1cac9e8fcd10cca9ec1fc1a4" protocol=ttrpc version=3 May 13 12:43:35.792305 systemd[1]: Started 
cri-containerd-2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7.scope - libcontainer container 2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7. May 13 12:43:35.795695 systemd[1]: Started cri-containerd-5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854.scope - libcontainer container 5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854. May 13 12:43:35.796625 systemd[1]: Started cri-containerd-affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180.scope - libcontainer container affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180. May 13 12:43:35.837933 containerd[1514]: time="2025-05-13T12:43:35.834085268Z" level=info msg="StartContainer for \"2e07049624e9da0c814c183fcfad357e0dd07198a9700d69e21ba45ba0bca0a7\" returns successfully" May 13 12:43:35.853882 containerd[1514]: time="2025-05-13T12:43:35.853620388Z" level=info msg="StartContainer for \"5ef3f3bb014b87a67a82d2e549c4e11fdd814d43459ebd58a9141dcaab059854\" returns successfully" May 13 12:43:35.859954 containerd[1514]: time="2025-05-13T12:43:35.859813668Z" level=info msg="StartContainer for \"affaae596bb559c38dfef32ff31230e8aefc00e4af935e5e5d6b6a3158529180\" returns successfully" May 13 12:43:35.886621 kubelet[2264]: I0513 12:43:35.883752 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:43:35.886621 kubelet[2264]: E0513 12:43:35.884075 2264 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.71:6443/api/v1/nodes\": dial tcp 10.0.0.71:6443: connect: connection refused" node="localhost" May 13 12:43:35.991257 kubelet[2264]: W0513 12:43:35.989457 2264 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.71:6443: connect: connection refused May 13 12:43:35.991257 kubelet[2264]: E0513 
12:43:35.989535 2264 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.71:6443: connect: connection refused" logger="UnhandledError" May 13 12:43:36.685260 kubelet[2264]: I0513 12:43:36.685226 2264 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:43:37.464009 kubelet[2264]: E0513 12:43:37.463967 2264 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 12:43:37.542153 kubelet[2264]: I0513 12:43:37.541847 2264 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 12:43:37.542153 kubelet[2264]: E0513 12:43:37.541890 2264 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 12:43:37.560270 kubelet[2264]: E0513 12:43:37.560236 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:37.660805 kubelet[2264]: E0513 12:43:37.660763 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:37.761457 kubelet[2264]: E0513 12:43:37.761360 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:37.861968 kubelet[2264]: E0513 12:43:37.861916 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:37.962481 kubelet[2264]: E0513 12:43:37.962412 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.063196 kubelet[2264]: E0513 12:43:38.063059 2264 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.163694 kubelet[2264]: E0513 12:43:38.163646 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.264429 kubelet[2264]: E0513 12:43:38.264385 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.364971 kubelet[2264]: E0513 12:43:38.364879 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.465512 kubelet[2264]: E0513 12:43:38.465473 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.566577 kubelet[2264]: E0513 12:43:38.566527 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.667195 kubelet[2264]: E0513 12:43:38.667067 2264 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:38.995892 kubelet[2264]: I0513 12:43:38.995762 2264 apiserver.go:52] "Watching apiserver" May 13 12:43:39.006187 kubelet[2264]: I0513 12:43:39.006161 2264 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 12:43:39.668247 systemd[1]: Reload requested from client PID 2539 ('systemctl') (unit session-7.scope)... May 13 12:43:39.668262 systemd[1]: Reloading... May 13 12:43:39.738247 zram_generator::config[2582]: No configuration found. May 13 12:43:39.808047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 12:43:39.905912 systemd[1]: Reloading finished in 237 ms. 
May 13 12:43:39.939710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:43:39.957135 systemd[1]: kubelet.service: Deactivated successfully. May 13 12:43:39.957402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:43:39.957457 systemd[1]: kubelet.service: Consumed 1.153s CPU time, 115.7M memory peak. May 13 12:43:39.959188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 12:43:40.083252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 12:43:40.087013 (kubelet)[2624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 12:43:40.127817 kubelet[2624]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 12:43:40.127817 kubelet[2624]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 12:43:40.128740 kubelet[2624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 12:43:40.128740 kubelet[2624]: I0513 12:43:40.128247 2624 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 12:43:40.134521 kubelet[2624]: I0513 12:43:40.134484 2624 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 12:43:40.134521 kubelet[2624]: I0513 12:43:40.134509 2624 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 12:43:40.134749 kubelet[2624]: I0513 12:43:40.134722 2624 server.go:929] "Client rotation is on, will bootstrap in background" May 13 12:43:40.136563 kubelet[2624]: I0513 12:43:40.136521 2624 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 12:43:40.139625 kubelet[2624]: I0513 12:43:40.139592 2624 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 12:43:40.143024 kubelet[2624]: I0513 12:43:40.143004 2624 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 13 12:43:40.145491 kubelet[2624]: I0513 12:43:40.145410 2624 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 12:43:40.145568 kubelet[2624]: I0513 12:43:40.145526 2624 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 12:43:40.145634 kubelet[2624]: I0513 12:43:40.145606 2624 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 12:43:40.145788 kubelet[2624]: I0513 12:43:40.145633 2624 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 13 12:43:40.145859 kubelet[2624]: I0513 12:43:40.145799 2624 topology_manager.go:138] "Creating topology manager with none policy" May 13 12:43:40.145859 kubelet[2624]: I0513 12:43:40.145807 2624 container_manager_linux.go:300] "Creating device plugin manager" May 13 12:43:40.145859 kubelet[2624]: I0513 12:43:40.145836 2624 state_mem.go:36] "Initialized new in-memory state store" May 13 12:43:40.145955 kubelet[2624]: I0513 12:43:40.145943 2624 kubelet.go:408] "Attempting to sync node with API server" May 13 12:43:40.145979 kubelet[2624]: I0513 12:43:40.145958 2624 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 12:43:40.146167 kubelet[2624]: I0513 12:43:40.145980 2624 kubelet.go:314] "Adding apiserver pod source" May 13 12:43:40.146167 kubelet[2624]: I0513 12:43:40.146007 2624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 12:43:40.147314 kubelet[2624]: I0513 12:43:40.147286 2624 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 13 12:43:40.147824 kubelet[2624]: I0513 12:43:40.147800 2624 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 12:43:40.148177 kubelet[2624]: I0513 12:43:40.148158 2624 server.go:1269] "Started kubelet" May 13 12:43:40.150155 kubelet[2624]: I0513 12:43:40.150097 2624 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 12:43:40.150433 kubelet[2624]: I0513 12:43:40.150383 2624 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 12:43:40.150699 kubelet[2624]: I0513 12:43:40.150681 2624 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 12:43:40.151908 kubelet[2624]: I0513 12:43:40.151707 2624 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 
12:43:40.152544 kubelet[2624]: I0513 12:43:40.152494 2624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 12:43:40.154359 kubelet[2624]: E0513 12:43:40.154315 2624 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 12:43:40.154424 kubelet[2624]: I0513 12:43:40.154376 2624 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 12:43:40.154597 kubelet[2624]: I0513 12:43:40.154575 2624 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 12:43:40.154741 kubelet[2624]: I0513 12:43:40.154726 2624 reconciler.go:26] "Reconciler: start to sync state" May 13 12:43:40.157896 kubelet[2624]: E0513 12:43:40.157872 2624 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 12:43:40.158046 kubelet[2624]: I0513 12:43:40.158024 2624 factory.go:221] Registration of the containerd container factory successfully May 13 12:43:40.158046 kubelet[2624]: I0513 12:43:40.158041 2624 factory.go:221] Registration of the systemd container factory successfully May 13 12:43:40.159400 kubelet[2624]: I0513 12:43:40.158120 2624 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 12:43:40.173838 kubelet[2624]: I0513 12:43:40.173807 2624 server.go:460] "Adding debug handlers to kubelet server" May 13 12:43:40.186060 kubelet[2624]: I0513 12:43:40.186021 2624 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 12:43:40.187473 kubelet[2624]: I0513 12:43:40.187441 2624 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 12:43:40.187473 kubelet[2624]: I0513 12:43:40.187475 2624 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 12:43:40.187566 kubelet[2624]: I0513 12:43:40.187507 2624 kubelet.go:2321] "Starting kubelet main sync loop" May 13 12:43:40.187589 kubelet[2624]: E0513 12:43:40.187547 2624 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 12:43:40.200476 kubelet[2624]: I0513 12:43:40.200402 2624 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 12:43:40.200892 kubelet[2624]: I0513 12:43:40.200582 2624 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 12:43:40.200892 kubelet[2624]: I0513 12:43:40.200650 2624 state_mem.go:36] "Initialized new in-memory state store" May 13 12:43:40.200892 kubelet[2624]: I0513 12:43:40.200800 2624 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 12:43:40.200892 kubelet[2624]: I0513 12:43:40.200811 2624 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 12:43:40.200892 kubelet[2624]: I0513 12:43:40.200827 2624 policy_none.go:49] "None policy: Start" May 13 12:43:40.202239 kubelet[2624]: I0513 12:43:40.201654 2624 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 12:43:40.202239 kubelet[2624]: I0513 12:43:40.202242 2624 state_mem.go:35] "Initializing new in-memory state store" May 13 12:43:40.202421 kubelet[2624]: I0513 12:43:40.202392 2624 state_mem.go:75] "Updated machine memory state" May 13 12:43:40.207102 kubelet[2624]: I0513 12:43:40.207072 2624 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 12:43:40.207585 kubelet[2624]: I0513 12:43:40.207518 2624 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 12:43:40.207585 kubelet[2624]: I0513 12:43:40.207538 2624 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 12:43:40.208013 kubelet[2624]: I0513 12:43:40.207793 2624 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 12:43:40.294873 kubelet[2624]: E0513 12:43:40.294745 2624 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:43:40.309151 kubelet[2624]: I0513 12:43:40.309106 2624 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 13 12:43:40.316996 kubelet[2624]: I0513 12:43:40.316957 2624 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 13 12:43:40.317191 kubelet[2624]: I0513 12:43:40.317169 2624 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 13 12:43:40.455292 kubelet[2624]: I0513 12:43:40.455111 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:40.455292 kubelet[2624]: I0513 12:43:40.455166 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 13 12:43:40.455292 kubelet[2624]: I0513 12:43:40.455187 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" 
May 13 12:43:40.455292 kubelet[2624]: I0513 12:43:40.455203 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" May 13 12:43:40.455292 kubelet[2624]: I0513 12:43:40.455224 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:40.455489 kubelet[2624]: I0513 12:43:40.455248 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:40.455489 kubelet[2624]: I0513 12:43:40.455263 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/24f02d37bf13a44f4ae1a909c0b91e21-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"24f02d37bf13a44f4ae1a909c0b91e21\") " pod="kube-system/kube-apiserver-localhost" May 13 12:43:40.455489 kubelet[2624]: I0513 12:43:40.455279 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 13 12:43:40.455489 kubelet[2624]: I0513 12:43:40.455307 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 13 12:43:40.672264 sudo[2656]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 12:43:40.672534 sudo[2656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 12:43:41.127081 sudo[2656]: pam_unix(sudo:session): session closed for user root May 13 12:43:41.148151 kubelet[2624]: I0513 12:43:41.148096 2624 apiserver.go:52] "Watching apiserver" May 13 12:43:41.155349 kubelet[2624]: I0513 12:43:41.155042 2624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 12:43:41.205637 kubelet[2624]: E0513 12:43:41.205507 2624 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 12:43:41.230620 kubelet[2624]: I0513 12:43:41.230549 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.230011108 podStartE2EDuration="2.230011108s" podCreationTimestamp="2025-05-13 12:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:43:41.221474308 +0000 UTC m=+1.131687641" watchObservedRunningTime="2025-05-13 12:43:41.230011108 +0000 UTC m=+1.140224401" May 13 12:43:41.239006 kubelet[2624]: I0513 12:43:41.238948 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.238919148 podStartE2EDuration="1.238919148s" podCreationTimestamp="2025-05-13 12:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:43:41.230791588 +0000 UTC m=+1.141004921" watchObservedRunningTime="2025-05-13 12:43:41.238919148 +0000 UTC m=+1.149132441" May 13 12:43:41.239183 kubelet[2624]: I0513 12:43:41.239034 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.239027748 podStartE2EDuration="1.239027748s" podCreationTimestamp="2025-05-13 12:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:43:41.237714828 +0000 UTC m=+1.147928161" watchObservedRunningTime="2025-05-13 12:43:41.239027748 +0000 UTC m=+1.149241081" May 13 12:43:42.544936 sudo[1726]: pam_unix(sudo:session): session closed for user root May 13 12:43:42.546336 sshd[1725]: Connection closed by 10.0.0.1 port 55046 May 13 12:43:42.546832 sshd-session[1722]: pam_unix(sshd:session): session closed for user core May 13 12:43:42.550494 systemd[1]: sshd@6-10.0.0.71:22-10.0.0.1:55046.service: Deactivated successfully. May 13 12:43:42.552296 systemd[1]: session-7.scope: Deactivated successfully. May 13 12:43:42.553027 systemd[1]: session-7.scope: Consumed 6.992s CPU time, 267.8M memory peak. May 13 12:43:42.554167 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. May 13 12:43:42.555236 systemd-logind[1500]: Removed session 7. 
May 13 12:43:45.335155 kubelet[2624]: I0513 12:43:45.335111 2624 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 12:43:45.335728 kubelet[2624]: I0513 12:43:45.335689 2624 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 12:43:45.335765 containerd[1514]: time="2025-05-13T12:43:45.335497284Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 12:43:46.251504 systemd[1]: Created slice kubepods-besteffort-pod60d87868_6adb_485d_9960_3459277c4903.slice - libcontainer container kubepods-besteffort-pod60d87868_6adb_485d_9960_3459277c4903.slice. May 13 12:43:46.267095 systemd[1]: Created slice kubepods-burstable-podbe117f66_880a_4e5b_9087_43617c3621e1.slice - libcontainer container kubepods-burstable-podbe117f66_880a_4e5b_9087_43617c3621e1.slice. May 13 12:43:46.293292 kubelet[2624]: I0513 12:43:46.293244 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be117f66-880a-4e5b-9087-43617c3621e1-clustermesh-secrets\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293292 kubelet[2624]: I0513 12:43:46.293285 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-hubble-tls\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293447 kubelet[2624]: I0513 12:43:46.293304 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60d87868-6adb-485d-9960-3459277c4903-xtables-lock\") pod \"kube-proxy-hcxkz\" (UID: 
\"60d87868-6adb-485d-9960-3459277c4903\") " pod="kube-system/kube-proxy-hcxkz" May 13 12:43:46.293447 kubelet[2624]: I0513 12:43:46.293321 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-run\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293447 kubelet[2624]: I0513 12:43:46.293346 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-bpf-maps\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293518 kubelet[2624]: I0513 12:43:46.293407 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-etc-cni-netd\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293518 kubelet[2624]: I0513 12:43:46.293507 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-xtables-lock\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293559 kubelet[2624]: I0513 12:43:46.293533 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jrft\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-kube-api-access-4jrft\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293559 kubelet[2624]: I0513 12:43:46.293551 
2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60d87868-6adb-485d-9960-3459277c4903-kube-proxy\") pod \"kube-proxy-hcxkz\" (UID: \"60d87868-6adb-485d-9960-3459277c4903\") " pod="kube-system/kube-proxy-hcxkz" May 13 12:43:46.293600 kubelet[2624]: I0513 12:43:46.293568 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghbdp\" (UniqueName: \"kubernetes.io/projected/60d87868-6adb-485d-9960-3459277c4903-kube-api-access-ghbdp\") pod \"kube-proxy-hcxkz\" (UID: \"60d87868-6adb-485d-9960-3459277c4903\") " pod="kube-system/kube-proxy-hcxkz" May 13 12:43:46.293642 kubelet[2624]: I0513 12:43:46.293624 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be117f66-880a-4e5b-9087-43617c3621e1-cilium-config-path\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293667 kubelet[2624]: I0513 12:43:46.293643 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60d87868-6adb-485d-9960-3459277c4903-lib-modules\") pod \"kube-proxy-hcxkz\" (UID: \"60d87868-6adb-485d-9960-3459277c4903\") " pod="kube-system/kube-proxy-hcxkz" May 13 12:43:46.293667 kubelet[2624]: I0513 12:43:46.293660 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-hostproc\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293714 kubelet[2624]: I0513 12:43:46.293674 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-cgroup\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293714 kubelet[2624]: I0513 12:43:46.293691 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-lib-modules\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293714 kubelet[2624]: I0513 12:43:46.293708 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cni-path\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293771 kubelet[2624]: I0513 12:43:46.293726 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-net\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.293771 kubelet[2624]: I0513 12:43:46.293741 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-kernel\") pod \"cilium-bqrs9\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") " pod="kube-system/cilium-bqrs9" May 13 12:43:46.445020 systemd[1]: Created slice kubepods-besteffort-pod4273a4e3_f4b7_4f4c_b408_bf73be0c0850.slice - libcontainer container kubepods-besteffort-pod4273a4e3_f4b7_4f4c_b408_bf73be0c0850.slice. 
May 13 12:43:46.495451 kubelet[2624]: I0513 12:43:46.495384 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-cilium-config-path\") pod \"cilium-operator-5d85765b45-jctgj\" (UID: \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\") " pod="kube-system/cilium-operator-5d85765b45-jctgj" May 13 12:43:46.495451 kubelet[2624]: I0513 12:43:46.495434 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7v4g\" (UniqueName: \"kubernetes.io/projected/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-kube-api-access-d7v4g\") pod \"cilium-operator-5d85765b45-jctgj\" (UID: \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\") " pod="kube-system/cilium-operator-5d85765b45-jctgj" May 13 12:43:46.564641 containerd[1514]: time="2025-05-13T12:43:46.564539516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcxkz,Uid:60d87868-6adb-485d-9960-3459277c4903,Namespace:kube-system,Attempt:0,}" May 13 12:43:46.573500 containerd[1514]: time="2025-05-13T12:43:46.573456778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqrs9,Uid:be117f66-880a-4e5b-9087-43617c3621e1,Namespace:kube-system,Attempt:0,}" May 13 12:43:46.580918 containerd[1514]: time="2025-05-13T12:43:46.580885110Z" level=info msg="connecting to shim 13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90" address="unix:///run/containerd/s/9bc1d60449aef6c5a15308e37d4414a4a73a5c3ec818237251c031a9f4cc9224" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:46.592151 containerd[1514]: time="2025-05-13T12:43:46.592097709Z" level=info msg="connecting to shim 27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:46.607303 systemd[1]: Started 
cri-containerd-13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90.scope - libcontainer container 13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90. May 13 12:43:46.611989 systemd[1]: Started cri-containerd-27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e.scope - libcontainer container 27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e. May 13 12:43:46.632502 containerd[1514]: time="2025-05-13T12:43:46.632463431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcxkz,Uid:60d87868-6adb-485d-9960-3459277c4903,Namespace:kube-system,Attempt:0,} returns sandbox id \"13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90\"" May 13 12:43:46.635999 containerd[1514]: time="2025-05-13T12:43:46.635374292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bqrs9,Uid:be117f66-880a-4e5b-9087-43617c3621e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\"" May 13 12:43:46.636787 containerd[1514]: time="2025-05-13T12:43:46.636755381Z" level=info msg="CreateContainer within sandbox \"13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 12:43:46.638490 containerd[1514]: time="2025-05-13T12:43:46.638445673Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 12:43:46.647546 containerd[1514]: time="2025-05-13T12:43:46.647357535Z" level=info msg="Container 5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:46.653663 containerd[1514]: time="2025-05-13T12:43:46.653569379Z" level=info msg="CreateContainer within sandbox \"13b814fab5cc87d6d61d66e7ce1247e40fcab5f3633043cdc224f07b6b8d8b90\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef\"" May 13 12:43:46.654750 containerd[1514]: time="2025-05-13T12:43:46.654318864Z" level=info msg="StartContainer for \"5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef\"" May 13 12:43:46.655612 containerd[1514]: time="2025-05-13T12:43:46.655578713Z" level=info msg="connecting to shim 5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef" address="unix:///run/containerd/s/9bc1d60449aef6c5a15308e37d4414a4a73a5c3ec818237251c031a9f4cc9224" protocol=ttrpc version=3 May 13 12:43:46.686374 systemd[1]: Started cri-containerd-5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef.scope - libcontainer container 5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef. May 13 12:43:46.719970 containerd[1514]: time="2025-05-13T12:43:46.719928563Z" level=info msg="StartContainer for \"5cae1c303d4d4ed40cacfee1f324fdaa0f3322870664b30163c39f0d163e1cef\" returns successfully" May 13 12:43:46.749017 containerd[1514]: time="2025-05-13T12:43:46.748941086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jctgj,Uid:4273a4e3-f4b7-4f4c-b408-bf73be0c0850,Namespace:kube-system,Attempt:0,}" May 13 12:43:46.768289 containerd[1514]: time="2025-05-13T12:43:46.768129741Z" level=info msg="connecting to shim 0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450" address="unix:///run/containerd/s/216149c392e3d596bd2bff381bd38625c90c97ed94da723ac5e828c99272fb32" namespace=k8s.io protocol=ttrpc version=3 May 13 12:43:46.792312 systemd[1]: Started cri-containerd-0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450.scope - libcontainer container 0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450. 
May 13 12:43:46.834590 containerd[1514]: time="2025-05-13T12:43:46.834278884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-jctgj,Uid:4273a4e3-f4b7-4f4c-b408-bf73be0c0850,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\"" May 13 12:43:47.221073 kubelet[2624]: I0513 12:43:47.220943 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcxkz" podStartSLOduration=1.220920173 podStartE2EDuration="1.220920173s" podCreationTimestamp="2025-05-13 12:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:43:47.220921013 +0000 UTC m=+7.131134386" watchObservedRunningTime="2025-05-13 12:43:47.220920173 +0000 UTC m=+7.131133506" May 13 12:43:52.769531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133168.mount: Deactivated successfully. 
May 13 12:43:55.839978 containerd[1514]: time="2025-05-13T12:43:55.839922629Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:43:55.840594 containerd[1514]: time="2025-05-13T12:43:55.840565791Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 12:43:55.841165 containerd[1514]: time="2025-05-13T12:43:55.841126514Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:43:55.842602 containerd[1514]: time="2025-05-13T12:43:55.842568679Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.204086486s" May 13 12:43:55.842684 containerd[1514]: time="2025-05-13T12:43:55.842603719Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 12:43:55.854518 containerd[1514]: time="2025-05-13T12:43:55.854482326Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 12:43:55.863057 containerd[1514]: time="2025-05-13T12:43:55.862983879Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:43:55.869090 containerd[1514]: time="2025-05-13T12:43:55.869054663Z" level=info msg="Container 0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:55.872339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102519329.mount: Deactivated successfully. May 13 12:43:55.874587 containerd[1514]: time="2025-05-13T12:43:55.874552565Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\"" May 13 12:43:55.877043 containerd[1514]: time="2025-05-13T12:43:55.876978014Z" level=info msg="StartContainer for \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\"" May 13 12:43:55.877950 containerd[1514]: time="2025-05-13T12:43:55.877916978Z" level=info msg="connecting to shim 0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" protocol=ttrpc version=3 May 13 12:43:55.890255 update_engine[1502]: I20250513 12:43:55.890187 1502 update_attempter.cc:509] Updating boot flags... May 13 12:43:55.924306 systemd[1]: Started cri-containerd-0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd.scope - libcontainer container 0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd. May 13 12:43:56.071251 containerd[1514]: time="2025-05-13T12:43:56.071207317Z" level=info msg="StartContainer for \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" returns successfully" May 13 12:43:56.086344 systemd[1]: cri-containerd-0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd.scope: Deactivated successfully. 
May 13 12:43:56.086855 systemd[1]: cri-containerd-0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd.scope: Consumed 61ms CPU time, 6.8M memory peak, 1.6M read from disk. May 13 12:43:56.127602 containerd[1514]: time="2025-05-13T12:43:56.127538644Z" level=info msg="received exit event container_id:\"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" id:\"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" pid:3052 exited_at:{seconds:1747140236 nanos:113739633}" May 13 12:43:56.132788 containerd[1514]: time="2025-05-13T12:43:56.132745783Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" id:\"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" pid:3052 exited_at:{seconds:1747140236 nanos:113739633}" May 13 12:43:56.171102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd-rootfs.mount: Deactivated successfully. May 13 12:43:56.945578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712322977.mount: Deactivated successfully. May 13 12:43:57.261215 containerd[1514]: time="2025-05-13T12:43:57.261106866Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:43:57.270258 containerd[1514]: time="2025-05-13T12:43:57.270218137Z" level=info msg="Container cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:57.271238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180495637.mount: Deactivated successfully. 
May 13 12:43:57.287531 containerd[1514]: time="2025-05-13T12:43:57.287487237Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\"" May 13 12:43:57.288090 containerd[1514]: time="2025-05-13T12:43:57.287986758Z" level=info msg="StartContainer for \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\"" May 13 12:43:57.288817 containerd[1514]: time="2025-05-13T12:43:57.288792721Z" level=info msg="connecting to shim cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" protocol=ttrpc version=3 May 13 12:43:57.311315 systemd[1]: Started cri-containerd-cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047.scope - libcontainer container cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047. May 13 12:43:57.334794 containerd[1514]: time="2025-05-13T12:43:57.334741239Z" level=info msg="StartContainer for \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" returns successfully" May 13 12:43:57.349954 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 12:43:57.350356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 12:43:57.350672 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 12:43:57.352536 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 12:43:57.354117 systemd[1]: cri-containerd-cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047.scope: Deactivated successfully. 
May 13 12:43:57.356232 containerd[1514]: time="2025-05-13T12:43:57.356191753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" id:\"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" pid:3109 exited_at:{seconds:1747140237 nanos:355793272}" May 13 12:43:57.363178 containerd[1514]: time="2025-05-13T12:43:57.363114017Z" level=info msg="received exit event container_id:\"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" id:\"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" pid:3109 exited_at:{seconds:1747140237 nanos:355793272}" May 13 12:43:57.377906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 12:43:57.942512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047-rootfs.mount: Deactivated successfully. May 13 12:43:58.267447 containerd[1514]: time="2025-05-13T12:43:58.267108630Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 12:43:58.329873 containerd[1514]: time="2025-05-13T12:43:58.329833873Z" level=info msg="Container 4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:58.333611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2378305497.mount: Deactivated successfully. 
May 13 12:43:58.338113 containerd[1514]: time="2025-05-13T12:43:58.337996979Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\"" May 13 12:43:58.338501 containerd[1514]: time="2025-05-13T12:43:58.338477780Z" level=info msg="StartContainer for \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\"" May 13 12:43:58.339777 containerd[1514]: time="2025-05-13T12:43:58.339718584Z" level=info msg="connecting to shim 4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" protocol=ttrpc version=3 May 13 12:43:58.368359 systemd[1]: Started cri-containerd-4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a.scope - libcontainer container 4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a. May 13 12:43:58.406357 containerd[1514]: time="2025-05-13T12:43:58.406219639Z" level=info msg="StartContainer for \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" returns successfully" May 13 12:43:58.419830 systemd[1]: cri-containerd-4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a.scope: Deactivated successfully. 
May 13 12:43:58.422389 containerd[1514]: time="2025-05-13T12:43:58.422356411Z" level=info msg="received exit event container_id:\"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" id:\"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" pid:3162 exited_at:{seconds:1747140238 nanos:422128850}" May 13 12:43:58.422635 containerd[1514]: time="2025-05-13T12:43:58.422408891Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" id:\"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" pid:3162 exited_at:{seconds:1747140238 nanos:422128850}" May 13 12:43:58.440492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a-rootfs.mount: Deactivated successfully. May 13 12:43:58.540715 containerd[1514]: time="2025-05-13T12:43:58.540616633Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:43:58.542086 containerd[1514]: time="2025-05-13T12:43:58.542051117Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 12:43:58.543181 containerd[1514]: time="2025-05-13T12:43:58.543125041Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 12:43:58.552542 containerd[1514]: time="2025-05-13T12:43:58.552512071Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo 
digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.697994185s" May 13 12:43:58.552697 containerd[1514]: time="2025-05-13T12:43:58.552542471Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 12:43:58.554776 containerd[1514]: time="2025-05-13T12:43:58.554748758Z" level=info msg="CreateContainer within sandbox \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 12:43:58.561404 containerd[1514]: time="2025-05-13T12:43:58.561372339Z" level=info msg="Container f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:58.566239 containerd[1514]: time="2025-05-13T12:43:58.566210075Z" level=info msg="CreateContainer within sandbox \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\"" May 13 12:43:58.566675 containerd[1514]: time="2025-05-13T12:43:58.566635636Z" level=info msg="StartContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\"" May 13 12:43:58.567453 containerd[1514]: time="2025-05-13T12:43:58.567430119Z" level=info msg="connecting to shim f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7" address="unix:///run/containerd/s/216149c392e3d596bd2bff381bd38625c90c97ed94da723ac5e828c99272fb32" protocol=ttrpc version=3 May 13 12:43:58.587274 systemd[1]: Started cri-containerd-f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7.scope - libcontainer container 
f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7. May 13 12:43:58.609496 containerd[1514]: time="2025-05-13T12:43:58.609468735Z" level=info msg="StartContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" returns successfully" May 13 12:43:59.272569 containerd[1514]: time="2025-05-13T12:43:59.272428219Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 12:43:59.277506 kubelet[2624]: I0513 12:43:59.277440 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-jctgj" podStartSLOduration=1.560027935 podStartE2EDuration="13.277424634s" podCreationTimestamp="2025-05-13 12:43:46 +0000 UTC" firstStartedPulling="2025-05-13 12:43:46.835598813 +0000 UTC m=+6.745812146" lastFinishedPulling="2025-05-13 12:43:58.552995512 +0000 UTC m=+18.463208845" observedRunningTime="2025-05-13 12:43:59.277340313 +0000 UTC m=+19.187553646" watchObservedRunningTime="2025-05-13 12:43:59.277424634 +0000 UTC m=+19.187637967" May 13 12:43:59.284290 containerd[1514]: time="2025-05-13T12:43:59.283249091Z" level=info msg="Container dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c: CDI devices from CRI Config.CDIDevices: []" May 13 12:43:59.293421 containerd[1514]: time="2025-05-13T12:43:59.293380642Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\"" May 13 12:43:59.295119 containerd[1514]: time="2025-05-13T12:43:59.295084047Z" level=info msg="StartContainer for \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\"" May 13 12:43:59.295874 containerd[1514]: time="2025-05-13T12:43:59.295834089Z" level=info 
msg="connecting to shim dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" protocol=ttrpc version=3 May 13 12:43:59.314308 systemd[1]: Started cri-containerd-dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c.scope - libcontainer container dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c. May 13 12:43:59.338177 systemd[1]: cri-containerd-dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c.scope: Deactivated successfully. May 13 12:43:59.339246 containerd[1514]: time="2025-05-13T12:43:59.339044020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" id:\"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" pid:3243 exited_at:{seconds:1747140239 nanos:338336218}" May 13 12:43:59.339661 containerd[1514]: time="2025-05-13T12:43:59.339613662Z" level=info msg="received exit event container_id:\"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" id:\"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" pid:3243 exited_at:{seconds:1747140239 nanos:338336218}" May 13 12:43:59.348767 containerd[1514]: time="2025-05-13T12:43:59.348730249Z" level=info msg="StartContainer for \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" returns successfully" May 13 12:43:59.369485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c-rootfs.mount: Deactivated successfully. 
May 13 12:44:00.279656 containerd[1514]: time="2025-05-13T12:44:00.279225453Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 12:44:00.289173 containerd[1514]: time="2025-05-13T12:44:00.288812480Z" level=info msg="Container 3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884: CDI devices from CRI Config.CDIDevices: []" May 13 12:44:00.292172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554327500.mount: Deactivated successfully. May 13 12:44:00.296541 containerd[1514]: time="2025-05-13T12:44:00.296509462Z" level=info msg="CreateContainer within sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\"" May 13 12:44:00.297160 containerd[1514]: time="2025-05-13T12:44:00.297076144Z" level=info msg="StartContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\"" May 13 12:44:00.297873 containerd[1514]: time="2025-05-13T12:44:00.297847626Z" level=info msg="connecting to shim 3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884" address="unix:///run/containerd/s/d1a83038cd098835f597edd85faa6710ea7725d3d145409f79a82874a30e66b7" protocol=ttrpc version=3 May 13 12:44:00.315275 systemd[1]: Started cri-containerd-3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884.scope - libcontainer container 3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884. 
May 13 12:44:00.367631 containerd[1514]: time="2025-05-13T12:44:00.366272060Z" level=info msg="StartContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" returns successfully" May 13 12:44:00.457356 containerd[1514]: time="2025-05-13T12:44:00.457301398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" id:\"b6ece988ed6ffc2d4b1067194e44bdd63888e5cbfdf81f3cd2b71bab1dbd8b55\" pid:3310 exited_at:{seconds:1747140240 nanos:457036157}" May 13 12:44:00.499657 kubelet[2624]: I0513 12:44:00.499617 2624 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 12:44:00.535680 systemd[1]: Created slice kubepods-burstable-pod19eae7dd_15e5_4c04_8d6d_f930ee45bcee.slice - libcontainer container kubepods-burstable-pod19eae7dd_15e5_4c04_8d6d_f930ee45bcee.slice. May 13 12:44:00.542837 systemd[1]: Created slice kubepods-burstable-pod20ded834_131c_47bb_b954_98c39e1b325d.slice - libcontainer container kubepods-burstable-pod20ded834_131c_47bb_b954_98c39e1b325d.slice. 
May 13 12:44:00.687650 kubelet[2624]: I0513 12:44:00.687615 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6chwb\" (UniqueName: \"kubernetes.io/projected/19eae7dd-15e5-4c04-8d6d-f930ee45bcee-kube-api-access-6chwb\") pod \"coredns-6f6b679f8f-6c7ct\" (UID: \"19eae7dd-15e5-4c04-8d6d-f930ee45bcee\") " pod="kube-system/coredns-6f6b679f8f-6c7ct" May 13 12:44:00.687650 kubelet[2624]: I0513 12:44:00.687657 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19eae7dd-15e5-4c04-8d6d-f930ee45bcee-config-volume\") pod \"coredns-6f6b679f8f-6c7ct\" (UID: \"19eae7dd-15e5-4c04-8d6d-f930ee45bcee\") " pod="kube-system/coredns-6f6b679f8f-6c7ct" May 13 12:44:00.687650 kubelet[2624]: I0513 12:44:00.687690 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20ded834-131c-47bb-b954-98c39e1b325d-config-volume\") pod \"coredns-6f6b679f8f-tpx5t\" (UID: \"20ded834-131c-47bb-b954-98c39e1b325d\") " pod="kube-system/coredns-6f6b679f8f-tpx5t" May 13 12:44:00.687854 kubelet[2624]: I0513 12:44:00.687711 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcqmq\" (UniqueName: \"kubernetes.io/projected/20ded834-131c-47bb-b954-98c39e1b325d-kube-api-access-xcqmq\") pod \"coredns-6f6b679f8f-tpx5t\" (UID: \"20ded834-131c-47bb-b954-98c39e1b325d\") " pod="kube-system/coredns-6f6b679f8f-tpx5t" May 13 12:44:00.842474 containerd[1514]: time="2025-05-13T12:44:00.842379730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6c7ct,Uid:19eae7dd-15e5-4c04-8d6d-f930ee45bcee,Namespace:kube-system,Attempt:0,}" May 13 12:44:00.846006 containerd[1514]: time="2025-05-13T12:44:00.845950940Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-tpx5t,Uid:20ded834-131c-47bb-b954-98c39e1b325d,Namespace:kube-system,Attempt:0,}" May 13 12:44:01.301935 kubelet[2624]: I0513 12:44:01.301840 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bqrs9" podStartSLOduration=6.087206049 podStartE2EDuration="15.301827019s" podCreationTimestamp="2025-05-13 12:43:46 +0000 UTC" firstStartedPulling="2025-05-13 12:43:46.637755628 +0000 UTC m=+6.547968961" lastFinishedPulling="2025-05-13 12:43:55.852376558 +0000 UTC m=+15.762589931" observedRunningTime="2025-05-13 12:44:01.301518339 +0000 UTC m=+21.211731672" watchObservedRunningTime="2025-05-13 12:44:01.301827019 +0000 UTC m=+21.212040352" May 13 12:44:02.530945 systemd-networkd[1429]: cilium_host: Link UP May 13 12:44:02.531070 systemd-networkd[1429]: cilium_net: Link UP May 13 12:44:02.531219 systemd-networkd[1429]: cilium_net: Gained carrier May 13 12:44:02.531359 systemd-networkd[1429]: cilium_host: Gained carrier May 13 12:44:02.611257 systemd-networkd[1429]: cilium_vxlan: Link UP May 13 12:44:02.611450 systemd-networkd[1429]: cilium_vxlan: Gained carrier May 13 12:44:02.920240 kernel: NET: Registered PF_ALG protocol family May 13 12:44:03.160302 systemd-networkd[1429]: cilium_net: Gained IPv6LL May 13 12:44:03.469062 systemd-networkd[1429]: lxc_health: Link UP May 13 12:44:03.469321 systemd-networkd[1429]: lxc_health: Gained carrier May 13 12:44:03.480380 systemd-networkd[1429]: cilium_host: Gained IPv6LL May 13 12:44:03.864572 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL May 13 12:44:03.935237 kernel: eth0: renamed from tmp0ee9c May 13 12:44:03.935299 systemd-networkd[1429]: lxca11a0d2c98e9: Link UP May 13 12:44:03.937398 kernel: eth0: renamed from tmp1d6d9 May 13 12:44:03.937809 systemd-networkd[1429]: lxc7df73e6f04b3: Link UP May 13 12:44:03.941289 systemd-networkd[1429]: lxca11a0d2c98e9: Gained carrier May 13 12:44:03.941471 systemd-networkd[1429]: lxc7df73e6f04b3: 
Gained carrier May 13 12:44:04.760312 systemd-networkd[1429]: lxc_health: Gained IPv6LL May 13 12:44:05.144317 systemd-networkd[1429]: lxca11a0d2c98e9: Gained IPv6LL May 13 12:44:05.464318 systemd-networkd[1429]: lxc7df73e6f04b3: Gained IPv6LL May 13 12:44:07.448515 containerd[1514]: time="2025-05-13T12:44:07.448258722Z" level=info msg="connecting to shim 0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c" address="unix:///run/containerd/s/eed653569e4cc8839a0b22654c733cd509a533cc993b274b5c616c43512b462a" namespace=k8s.io protocol=ttrpc version=3 May 13 12:44:07.451482 containerd[1514]: time="2025-05-13T12:44:07.451286727Z" level=info msg="connecting to shim 1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a" address="unix:///run/containerd/s/0006771f965b3e675699d1addf57279ae1370e391a44b06a052c19097919a1f9" namespace=k8s.io protocol=ttrpc version=3 May 13 12:44:07.476315 systemd[1]: Started cri-containerd-1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a.scope - libcontainer container 1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a. May 13 12:44:07.479466 systemd[1]: Started cri-containerd-0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c.scope - libcontainer container 0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c. 
May 13 12:44:07.488721 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:44:07.490405 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 12:44:07.513269 containerd[1514]: time="2025-05-13T12:44:07.513216519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-6c7ct,Uid:19eae7dd-15e5-4c04-8d6d-f930ee45bcee,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c\"" May 13 12:44:07.514163 containerd[1514]: time="2025-05-13T12:44:07.514109481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-tpx5t,Uid:20ded834-131c-47bb-b954-98c39e1b325d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a\"" May 13 12:44:07.522909 containerd[1514]: time="2025-05-13T12:44:07.522879697Z" level=info msg="CreateContainer within sandbox \"0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:44:07.523860 containerd[1514]: time="2025-05-13T12:44:07.523815138Z" level=info msg="CreateContainer within sandbox \"1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 12:44:07.536100 containerd[1514]: time="2025-05-13T12:44:07.536061480Z" level=info msg="Container 517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565: CDI devices from CRI Config.CDIDevices: []" May 13 12:44:07.538965 containerd[1514]: time="2025-05-13T12:44:07.538386005Z" level=info msg="Container be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b: CDI devices from CRI Config.CDIDevices: []" May 13 12:44:07.543559 containerd[1514]: time="2025-05-13T12:44:07.543525574Z" level=info msg="CreateContainer within 
sandbox \"0ee9c94bad54a8b4fc52cfab994671328414e0160a0b958200134783f94bd80c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565\"" May 13 12:44:07.544170 containerd[1514]: time="2025-05-13T12:44:07.544135655Z" level=info msg="StartContainer for \"517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565\"" May 13 12:44:07.544775 containerd[1514]: time="2025-05-13T12:44:07.544731136Z" level=info msg="CreateContainer within sandbox \"1d6d9c2dc7e858943c01a42c129935079460bd8d132ac795a33d992100507d8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b\"" May 13 12:44:07.545117 containerd[1514]: time="2025-05-13T12:44:07.545090017Z" level=info msg="StartContainer for \"be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b\"" May 13 12:44:07.545430 containerd[1514]: time="2025-05-13T12:44:07.545401897Z" level=info msg="connecting to shim 517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565" address="unix:///run/containerd/s/eed653569e4cc8839a0b22654c733cd509a533cc993b274b5c616c43512b462a" protocol=ttrpc version=3 May 13 12:44:07.546067 containerd[1514]: time="2025-05-13T12:44:07.546036098Z" level=info msg="connecting to shim be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b" address="unix:///run/containerd/s/0006771f965b3e675699d1addf57279ae1370e391a44b06a052c19097919a1f9" protocol=ttrpc version=3 May 13 12:44:07.567286 systemd[1]: Started cri-containerd-be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b.scope - libcontainer container be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b. May 13 12:44:07.569999 systemd[1]: Started cri-containerd-517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565.scope - libcontainer container 517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565. 
May 13 12:44:07.602647 containerd[1514]: time="2025-05-13T12:44:07.602612280Z" level=info msg="StartContainer for \"517c82484da847329f15d595c037bd3379506a307352cfc1716d2842c45da565\" returns successfully" May 13 12:44:07.610613 containerd[1514]: time="2025-05-13T12:44:07.609810133Z" level=info msg="StartContainer for \"be0f56bb0ba079bfdf5ab395a4c7458de4057edda91d317ac70eacb7772b827b\" returns successfully" May 13 12:44:08.307660 kubelet[2624]: I0513 12:44:08.307600 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-tpx5t" podStartSLOduration=22.307584718 podStartE2EDuration="22.307584718s" podCreationTimestamp="2025-05-13 12:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:44:08.305921076 +0000 UTC m=+28.216134409" watchObservedRunningTime="2025-05-13 12:44:08.307584718 +0000 UTC m=+28.217798051" May 13 12:44:08.340728 kubelet[2624]: I0513 12:44:08.340665 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-6c7ct" podStartSLOduration=22.340646534 podStartE2EDuration="22.340646534s" podCreationTimestamp="2025-05-13 12:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:44:08.338594611 +0000 UTC m=+28.248807944" watchObservedRunningTime="2025-05-13 12:44:08.340646534 +0000 UTC m=+28.250859867" May 13 12:44:08.629997 systemd[1]: Started sshd@7-10.0.0.71:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238). 
May 13 12:44:08.685448 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:08.686896 sshd-session[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:08.694221 systemd-logind[1500]: New session 8 of user core. May 13 12:44:08.708386 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 12:44:08.859988 sshd[3966]: Connection closed by 10.0.0.1 port 34238 May 13 12:44:08.859845 sshd-session[3964]: pam_unix(sshd:session): session closed for user core May 13 12:44:08.863623 systemd[1]: sshd@7-10.0.0.71:22-10.0.0.1:34238.service: Deactivated successfully. May 13 12:44:08.866426 systemd[1]: session-8.scope: Deactivated successfully. May 13 12:44:08.867402 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. May 13 12:44:08.868971 systemd-logind[1500]: Removed session 8. May 13 12:44:08.998452 kubelet[2624]: I0513 12:44:08.998340 2624 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 12:44:13.875664 systemd[1]: Started sshd@8-10.0.0.71:22-10.0.0.1:43166.service - OpenSSH per-connection server daemon (10.0.0.1:43166). May 13 12:44:13.925897 sshd[3982]: Accepted publickey for core from 10.0.0.1 port 43166 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:13.927093 sshd-session[3982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:13.931355 systemd-logind[1500]: New session 9 of user core. May 13 12:44:13.946295 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 12:44:14.058600 sshd[3984]: Connection closed by 10.0.0.1 port 43166 May 13 12:44:14.059105 sshd-session[3982]: pam_unix(sshd:session): session closed for user core May 13 12:44:14.062627 systemd[1]: sshd@8-10.0.0.71:22-10.0.0.1:43166.service: Deactivated successfully. 
May 13 12:44:14.066333 systemd[1]: session-9.scope: Deactivated successfully. May 13 12:44:14.068083 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. May 13 12:44:14.069380 systemd-logind[1500]: Removed session 9. May 13 12:44:19.074505 systemd[1]: Started sshd@9-10.0.0.71:22-10.0.0.1:43168.service - OpenSSH per-connection server daemon (10.0.0.1:43168). May 13 12:44:19.116066 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 43168 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:19.117419 sshd-session[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:19.121918 systemd-logind[1500]: New session 10 of user core. May 13 12:44:19.133308 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 12:44:19.242290 sshd[4004]: Connection closed by 10.0.0.1 port 43168 May 13 12:44:19.243063 sshd-session[4002]: pam_unix(sshd:session): session closed for user core May 13 12:44:19.246780 systemd[1]: sshd@9-10.0.0.71:22-10.0.0.1:43168.service: Deactivated successfully. May 13 12:44:19.249654 systemd[1]: session-10.scope: Deactivated successfully. May 13 12:44:19.251333 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. May 13 12:44:19.252842 systemd-logind[1500]: Removed session 10. May 13 12:44:24.261650 systemd[1]: Started sshd@10-10.0.0.71:22-10.0.0.1:54656.service - OpenSSH per-connection server daemon (10.0.0.1:54656). May 13 12:44:24.310113 sshd[4019]: Accepted publickey for core from 10.0.0.1 port 54656 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:24.311468 sshd-session[4019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:24.315159 systemd-logind[1500]: New session 11 of user core. May 13 12:44:24.325300 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 13 12:44:24.434240 sshd[4021]: Connection closed by 10.0.0.1 port 54656 May 13 12:44:24.434691 sshd-session[4019]: pam_unix(sshd:session): session closed for user core May 13 12:44:24.454372 systemd[1]: sshd@10-10.0.0.71:22-10.0.0.1:54656.service: Deactivated successfully. May 13 12:44:24.456021 systemd[1]: session-11.scope: Deactivated successfully. May 13 12:44:24.456860 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. May 13 12:44:24.459916 systemd[1]: Started sshd@11-10.0.0.71:22-10.0.0.1:54660.service - OpenSSH per-connection server daemon (10.0.0.1:54660). May 13 12:44:24.460499 systemd-logind[1500]: Removed session 11. May 13 12:44:24.505285 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 54660 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:24.506548 sshd-session[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:24.510778 systemd-logind[1500]: New session 12 of user core. May 13 12:44:24.527303 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 12:44:24.671077 sshd[4037]: Connection closed by 10.0.0.1 port 54660 May 13 12:44:24.671617 sshd-session[4035]: pam_unix(sshd:session): session closed for user core May 13 12:44:24.682325 systemd[1]: sshd@11-10.0.0.71:22-10.0.0.1:54660.service: Deactivated successfully. May 13 12:44:24.683977 systemd[1]: session-12.scope: Deactivated successfully. May 13 12:44:24.685359 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. May 13 12:44:24.690423 systemd[1]: Started sshd@12-10.0.0.71:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668). May 13 12:44:24.691038 systemd-logind[1500]: Removed session 12. 
May 13 12:44:24.745336 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:24.746424 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:24.750876 systemd-logind[1500]: New session 13 of user core. May 13 12:44:24.771369 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 12:44:24.884906 sshd[4051]: Connection closed by 10.0.0.1 port 54668 May 13 12:44:24.885244 sshd-session[4049]: pam_unix(sshd:session): session closed for user core May 13 12:44:24.888450 systemd[1]: sshd@12-10.0.0.71:22-10.0.0.1:54668.service: Deactivated successfully. May 13 12:44:24.889999 systemd[1]: session-13.scope: Deactivated successfully. May 13 12:44:24.890782 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. May 13 12:44:24.892039 systemd-logind[1500]: Removed session 13. May 13 12:44:29.898647 systemd[1]: Started sshd@13-10.0.0.71:22-10.0.0.1:54674.service - OpenSSH per-connection server daemon (10.0.0.1:54674). May 13 12:44:29.961688 sshd[4065]: Accepted publickey for core from 10.0.0.1 port 54674 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:29.963719 sshd-session[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:29.968212 systemd-logind[1500]: New session 14 of user core. May 13 12:44:29.977309 systemd[1]: Started session-14.scope - Session 14 of User core. May 13 12:44:30.087122 sshd[4067]: Connection closed by 10.0.0.1 port 54674 May 13 12:44:30.087455 sshd-session[4065]: pam_unix(sshd:session): session closed for user core May 13 12:44:30.090756 systemd[1]: sshd@13-10.0.0.71:22-10.0.0.1:54674.service: Deactivated successfully. May 13 12:44:30.092848 systemd[1]: session-14.scope: Deactivated successfully. May 13 12:44:30.093600 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. 
May 13 12:44:30.094657 systemd-logind[1500]: Removed session 14. May 13 12:44:35.103558 systemd[1]: Started sshd@14-10.0.0.71:22-10.0.0.1:39182.service - OpenSSH per-connection server daemon (10.0.0.1:39182). May 13 12:44:35.142135 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 39182 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:35.143373 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:35.146944 systemd-logind[1500]: New session 15 of user core. May 13 12:44:35.162304 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 12:44:35.269627 sshd[4084]: Connection closed by 10.0.0.1 port 39182 May 13 12:44:35.270122 sshd-session[4082]: pam_unix(sshd:session): session closed for user core May 13 12:44:35.283338 systemd[1]: sshd@14-10.0.0.71:22-10.0.0.1:39182.service: Deactivated successfully. May 13 12:44:35.285422 systemd[1]: session-15.scope: Deactivated successfully. May 13 12:44:35.286244 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. May 13 12:44:35.288833 systemd[1]: Started sshd@15-10.0.0.71:22-10.0.0.1:39190.service - OpenSSH per-connection server daemon (10.0.0.1:39190). May 13 12:44:35.289514 systemd-logind[1500]: Removed session 15. May 13 12:44:35.340559 sshd[4097]: Accepted publickey for core from 10.0.0.1 port 39190 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:35.341776 sshd-session[4097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:35.345769 systemd-logind[1500]: New session 16 of user core. May 13 12:44:35.356378 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 13 12:44:35.544213 sshd[4099]: Connection closed by 10.0.0.1 port 39190 May 13 12:44:35.544874 sshd-session[4097]: pam_unix(sshd:session): session closed for user core May 13 12:44:35.560240 systemd[1]: sshd@15-10.0.0.71:22-10.0.0.1:39190.service: Deactivated successfully. May 13 12:44:35.562261 systemd[1]: session-16.scope: Deactivated successfully. May 13 12:44:35.563088 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. May 13 12:44:35.565931 systemd[1]: Started sshd@16-10.0.0.71:22-10.0.0.1:39202.service - OpenSSH per-connection server daemon (10.0.0.1:39202). May 13 12:44:35.567858 systemd-logind[1500]: Removed session 16. May 13 12:44:35.620619 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 39202 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:35.622930 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:35.627510 systemd-logind[1500]: New session 17 of user core. May 13 12:44:35.638331 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 12:44:36.907164 sshd[4112]: Connection closed by 10.0.0.1 port 39202 May 13 12:44:36.908395 sshd-session[4110]: pam_unix(sshd:session): session closed for user core May 13 12:44:36.916511 systemd[1]: sshd@16-10.0.0.71:22-10.0.0.1:39202.service: Deactivated successfully. May 13 12:44:36.923196 systemd[1]: session-17.scope: Deactivated successfully. May 13 12:44:36.925053 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. May 13 12:44:36.929649 systemd[1]: Started sshd@17-10.0.0.71:22-10.0.0.1:39208.service - OpenSSH per-connection server daemon (10.0.0.1:39208). May 13 12:44:36.933279 systemd-logind[1500]: Removed session 17. 
May 13 12:44:36.980965 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 39208 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:36.983300 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:36.988197 systemd-logind[1500]: New session 18 of user core.
May 13 12:44:36.999315 systemd[1]: Started session-18.scope - Session 18 of User core.
May 13 12:44:37.234002 sshd[4133]: Connection closed by 10.0.0.1 port 39208
May 13 12:44:37.235744 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
May 13 12:44:37.248389 systemd[1]: sshd@17-10.0.0.71:22-10.0.0.1:39208.service: Deactivated successfully.
May 13 12:44:37.250402 systemd[1]: session-18.scope: Deactivated successfully.
May 13 12:44:37.251319 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit.
May 13 12:44:37.255179 systemd[1]: Started sshd@18-10.0.0.71:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222).
May 13 12:44:37.255922 systemd-logind[1500]: Removed session 18.
May 13 12:44:37.305607 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:37.306984 sshd-session[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:37.311028 systemd-logind[1500]: New session 19 of user core.
May 13 12:44:37.322368 systemd[1]: Started session-19.scope - Session 19 of User core.
May 13 12:44:37.434183 sshd[4147]: Connection closed by 10.0.0.1 port 39222
May 13 12:44:37.434725 sshd-session[4145]: pam_unix(sshd:session): session closed for user core
May 13 12:44:37.438392 systemd[1]: sshd@18-10.0.0.71:22-10.0.0.1:39222.service: Deactivated successfully.
May 13 12:44:37.440387 systemd[1]: session-19.scope: Deactivated successfully.
May 13 12:44:37.441287 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit.
May 13 12:44:37.442417 systemd-logind[1500]: Removed session 19.
May 13 12:44:42.450573 systemd[1]: Started sshd@19-10.0.0.71:22-10.0.0.1:39224.service - OpenSSH per-connection server daemon (10.0.0.1:39224).
May 13 12:44:42.504705 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:42.505914 sshd-session[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:42.509791 systemd-logind[1500]: New session 20 of user core.
May 13 12:44:42.521164 systemd[1]: Started session-20.scope - Session 20 of User core.
May 13 12:44:42.632184 sshd[4167]: Connection closed by 10.0.0.1 port 39224
May 13 12:44:42.632165 sshd-session[4165]: pam_unix(sshd:session): session closed for user core
May 13 12:44:42.635486 systemd[1]: sshd@19-10.0.0.71:22-10.0.0.1:39224.service: Deactivated successfully.
May 13 12:44:42.637669 systemd[1]: session-20.scope: Deactivated successfully.
May 13 12:44:42.638510 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit.
May 13 12:44:42.640159 systemd-logind[1500]: Removed session 20.
May 13 12:44:47.646490 systemd[1]: Started sshd@20-10.0.0.71:22-10.0.0.1:41622.service - OpenSSH per-connection server daemon (10.0.0.1:41622).
May 13 12:44:47.703960 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 41622 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:47.705097 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:47.709194 systemd-logind[1500]: New session 21 of user core.
May 13 12:44:47.719300 systemd[1]: Started session-21.scope - Session 21 of User core.
May 13 12:44:47.825024 sshd[4184]: Connection closed by 10.0.0.1 port 41622
May 13 12:44:47.825613 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
May 13 12:44:47.829205 systemd[1]: sshd@20-10.0.0.71:22-10.0.0.1:41622.service: Deactivated successfully.
May 13 12:44:47.831117 systemd[1]: session-21.scope: Deactivated successfully.
May 13 12:44:47.831879 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit.
May 13 12:44:47.833361 systemd-logind[1500]: Removed session 21.
May 13 12:44:52.840493 systemd[1]: Started sshd@21-10.0.0.71:22-10.0.0.1:51286.service - OpenSSH per-connection server daemon (10.0.0.1:51286).
May 13 12:44:52.887387 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 51286 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:52.888518 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:52.893042 systemd-logind[1500]: New session 22 of user core.
May 13 12:44:52.906336 systemd[1]: Started session-22.scope - Session 22 of User core.
May 13 12:44:53.010197 sshd[4200]: Connection closed by 10.0.0.1 port 51286
May 13 12:44:53.010523 sshd-session[4198]: pam_unix(sshd:session): session closed for user core
May 13 12:44:53.025263 systemd[1]: sshd@21-10.0.0.71:22-10.0.0.1:51286.service: Deactivated successfully.
May 13 12:44:53.027839 systemd[1]: session-22.scope: Deactivated successfully.
May 13 12:44:53.030105 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit.
May 13 12:44:53.031750 systemd[1]: Started sshd@22-10.0.0.71:22-10.0.0.1:51288.service - OpenSSH per-connection server daemon (10.0.0.1:51288).
May 13 12:44:53.033272 systemd-logind[1500]: Removed session 22.
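The entries above show a repeating SSH lifecycle: each connection gets a per-connection sshd service, a logind session is opened, and on disconnect the session scope is deactivated and the session removed. As an illustrative annotation only (the helper and its name are hypothetical; the sample lines are copied verbatim from this log), a minimal sketch of checking that every opened session was eventually removed:

```python
import re

# Sample journal lines copied from the log above (one session's full lifecycle).
log = """\
May 13 12:44:52.893042 systemd-logind[1500]: New session 22 of user core.
May 13 12:44:53.030105 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit.
May 13 12:44:53.033272 systemd-logind[1500]: Removed session 22.
"""

def unclosed_sessions(text):
    # Pair "New session N" with "Removed session N"; anything left over
    # is a session that was opened but never torn down.
    opened = set(re.findall(r"New session (\d+) of user", text))
    removed = set(re.findall(r"Removed session (\d+)\.", text))
    return opened - removed

print(unclosed_sessions(log))  # set() — every opened session was removed
```

In the log above this invariant holds for sessions 14 through 22: each "New session" line has a matching "Removed session" line.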
May 13 12:44:53.077365 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 51288 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk
May 13 12:44:53.078429 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 12:44:53.082186 systemd-logind[1500]: New session 23 of user core.
May 13 12:44:53.099339 systemd[1]: Started session-23.scope - Session 23 of User core.
May 13 12:44:54.839361 containerd[1514]: time="2025-05-13T12:44:54.838028723Z" level=info msg="StopContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" with timeout 30 (s)"
May 13 12:44:54.840415 containerd[1514]: time="2025-05-13T12:44:54.840330867Z" level=info msg="Stop container \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" with signal terminated"
May 13 12:44:54.856917 systemd[1]: cri-containerd-f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7.scope: Deactivated successfully.
May 13 12:44:54.859482 containerd[1514]: time="2025-05-13T12:44:54.859441790Z" level=info msg="received exit event container_id:\"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" id:\"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" pid:3207 exited_at:{seconds:1747140294 nanos:859102706}"
May 13 12:44:54.859705 containerd[1514]: time="2025-05-13T12:44:54.859680192Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" id:\"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" pid:3207 exited_at:{seconds:1747140294 nanos:859102706}"
May 13 12:44:54.872569 containerd[1514]: time="2025-05-13T12:44:54.872530328Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 12:44:54.878435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7-rootfs.mount: Deactivated successfully.
May 13 12:44:54.879923 containerd[1514]: time="2025-05-13T12:44:54.879893686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" id:\"f941a2bf01557973d6d315a336ecda0f8543555a74d5c91b9ea5cb63a9fc81a7\" pid:4242 exited_at:{seconds:1747140294 nanos:879500402}"
May 13 12:44:54.883220 containerd[1514]: time="2025-05-13T12:44:54.883191921Z" level=info msg="StopContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" with timeout 2 (s)"
May 13 12:44:54.883499 containerd[1514]: time="2025-05-13T12:44:54.883457124Z" level=info msg="Stop container \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" with signal terminated"
May 13 12:44:54.889254 systemd-networkd[1429]: lxc_health: Link DOWN
May 13 12:44:54.889260 systemd-networkd[1429]: lxc_health: Lost carrier
May 13 12:44:54.896419 containerd[1514]: time="2025-05-13T12:44:54.896385741Z" level=info msg="StopContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" returns successfully"
May 13 12:44:54.899358 containerd[1514]: time="2025-05-13T12:44:54.899291452Z" level=info msg="StopPodSandbox for \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\""
May 13 12:44:54.899438 containerd[1514]: time="2025-05-13T12:44:54.899414293Z" level=info msg="Container to stop \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.905725 systemd[1]: cri-containerd-3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884.scope: Deactivated successfully.
May 13 12:44:54.906049 systemd[1]: cri-containerd-3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884.scope: Consumed 6.314s CPU time, 127.6M memory peak, 3.4M read from disk, 14M written to disk.
May 13 12:44:54.907915 containerd[1514]: time="2025-05-13T12:44:54.907882463Z" level=info msg="received exit event container_id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" pid:3280 exited_at:{seconds:1747140294 nanos:907665820}"
May 13 12:44:54.908008 containerd[1514]: time="2025-05-13T12:44:54.907903743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" id:\"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" pid:3280 exited_at:{seconds:1747140294 nanos:907665820}"
May 13 12:44:54.912622 systemd[1]: cri-containerd-0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450.scope: Deactivated successfully.
May 13 12:44:54.913696 containerd[1514]: time="2025-05-13T12:44:54.913655004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" id:\"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" pid:2857 exit_status:137 exited_at:{seconds:1747140294 nanos:913380201}"
May 13 12:44:54.932558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884-rootfs.mount: Deactivated successfully.
May 13 12:44:54.941258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450-rootfs.mount: Deactivated successfully.
May 13 12:44:54.947171 containerd[1514]: time="2025-05-13T12:44:54.947115638Z" level=info msg="shim disconnected" id=0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450 namespace=k8s.io
May 13 12:44:54.947312 containerd[1514]: time="2025-05-13T12:44:54.947168719Z" level=warning msg="cleaning up after shim disconnected" id=0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450 namespace=k8s.io
May 13 12:44:54.947312 containerd[1514]: time="2025-05-13T12:44:54.947200439Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 12:44:54.949657 containerd[1514]: time="2025-05-13T12:44:54.949567224Z" level=info msg="StopContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" returns successfully"
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950289392Z" level=info msg="StopPodSandbox for \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\""
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950349033Z" level=info msg="Container to stop \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950360993Z" level=info msg="Container to stop \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950369193Z" level=info msg="Container to stop \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950377073Z" level=info msg="Container to stop \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.950566 containerd[1514]: time="2025-05-13T12:44:54.950384473Z" level=info msg="Container to stop \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 13 12:44:54.956289 systemd[1]: cri-containerd-27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e.scope: Deactivated successfully.
May 13 12:44:54.961665 containerd[1514]: time="2025-05-13T12:44:54.961619472Z" level=info msg="received exit event sandbox_id:\"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" exit_status:137 exited_at:{seconds:1747140294 nanos:913380201}"
May 13 12:44:54.961886 containerd[1514]: time="2025-05-13T12:44:54.961632912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" id:\"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" pid:2772 exit_status:137 exited_at:{seconds:1747140294 nanos:960434419}"
May 13 12:44:54.961976 containerd[1514]: time="2025-05-13T12:44:54.961952236Z" level=info msg="TearDown network for sandbox \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" successfully"
May 13 12:44:54.961976 containerd[1514]: time="2025-05-13T12:44:54.961971876Z" level=info msg="StopPodSandbox for \"0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450\" returns successfully"
May 13 12:44:54.963846 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dadf086d1f77d3bcf5cb29d1832cfaebad2d8ff9f86cbea0c0205e6a031f450-shm.mount: Deactivated successfully.
May 13 12:44:54.987676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e-rootfs.mount: Deactivated successfully.
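The containerd TaskExit events above record shutdown times as an `exited_at:{seconds:... nanos:...}` pair (Unix epoch seconds plus nanoseconds). As an illustrative annotation (the helper name is hypothetical; the values are copied from the log), a minimal sketch of converting such a pair back to a readable UTC timestamp:

```python
from datetime import datetime, timezone

def exited_at_to_utc(seconds, nanos):
    # Combine epoch seconds and nanoseconds into a timezone-aware datetime.
    # Python datetimes only carry microsecond precision, so nanos are truncated
    # to the nearest microsecond by the float conversion.
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# Values from the TaskExit event for sandbox 0dadf086... above.
exited = exited_at_to_utc(1747140294, 913380201)
print(exited.isoformat())  # 2025-05-13T12:44:54.913380+00:00
```

The result matches the human-readable `time="2025-05-13T12:44:54.913655004Z"` on the same log line up to event-emission latency, confirming the two fields describe the same instant.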
May 13 12:44:54.992830 containerd[1514]: time="2025-05-13T12:44:54.992468879Z" level=info msg="received exit event sandbox_id:\"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" exit_status:137 exited_at:{seconds:1747140294 nanos:960434419}"
May 13 12:44:54.993025 containerd[1514]: time="2025-05-13T12:44:54.992972604Z" level=info msg="TearDown network for sandbox \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" successfully"
May 13 12:44:54.993073 containerd[1514]: time="2025-05-13T12:44:54.993028765Z" level=info msg="StopPodSandbox for \"27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e\" returns successfully"
May 13 12:44:54.993166 containerd[1514]: time="2025-05-13T12:44:54.992996084Z" level=info msg="shim disconnected" id=27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e namespace=k8s.io
May 13 12:44:54.993218 containerd[1514]: time="2025-05-13T12:44:54.993174126Z" level=warning msg="cleaning up after shim disconnected" id=27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e namespace=k8s.io
May 13 12:44:54.993218 containerd[1514]: time="2025-05-13T12:44:54.993214047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 12:44:55.071478 kubelet[2624]: I0513 12:44:55.071430 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-cilium-config-path\") pod \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\" (UID: \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\") "
May 13 12:44:55.071478 kubelet[2624]: I0513 12:44:55.071472 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-run\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071478 kubelet[2624]: I0513 12:44:55.071490 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-etc-cni-netd\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071508 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jrft\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-kube-api-access-4jrft\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071525 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-hostproc\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071539 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-lib-modules\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071553 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-kernel\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071570 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7v4g\" (UniqueName: \"kubernetes.io/projected/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-kube-api-access-d7v4g\") pod \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\" (UID: \"4273a4e3-f4b7-4f4c-b408-bf73be0c0850\") "
May 13 12:44:55.071878 kubelet[2624]: I0513 12:44:55.071586 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be117f66-880a-4e5b-9087-43617c3621e1-clustermesh-secrets\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.075396 kubelet[2624]: I0513 12:44:55.075165 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.075396 kubelet[2624]: I0513 12:44:55.075262 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.075396 kubelet[2624]: I0513 12:44:55.075283 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.078161 kubelet[2624]: I0513 12:44:55.078107 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-hostproc" (OuterVolumeSpecName: "hostproc") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.078228 kubelet[2624]: I0513 12:44:55.078174 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.079910 kubelet[2624]: I0513 12:44:55.079843 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-kube-api-access-d7v4g" (OuterVolumeSpecName: "kube-api-access-d7v4g") pod "4273a4e3-f4b7-4f4c-b408-bf73be0c0850" (UID: "4273a4e3-f4b7-4f4c-b408-bf73be0c0850"). InnerVolumeSpecName "kube-api-access-d7v4g". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 12:44:55.081233 kubelet[2624]: I0513 12:44:55.081182 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4273a4e3-f4b7-4f4c-b408-bf73be0c0850" (UID: "4273a4e3-f4b7-4f4c-b408-bf73be0c0850"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 12:44:55.081233 kubelet[2624]: I0513 12:44:55.081198 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-kube-api-access-4jrft" (OuterVolumeSpecName: "kube-api-access-4jrft") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "kube-api-access-4jrft". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 12:44:55.082534 kubelet[2624]: I0513 12:44:55.082494 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be117f66-880a-4e5b-9087-43617c3621e1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 13 12:44:55.172308 kubelet[2624]: I0513 12:44:55.172280 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-bpf-maps\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172452 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be117f66-880a-4e5b-9087-43617c3621e1-cilium-config-path\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172478 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-hubble-tls\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172494 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-cgroup\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172508 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-net\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172525 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cni-path\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.172923 kubelet[2624]: I0513 12:44:55.172539 2624 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-xtables-lock\") pod \"be117f66-880a-4e5b-9087-43617c3621e1\" (UID: \"be117f66-880a-4e5b-9087-43617c3621e1\") "
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172571 2624 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172580 2624 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-run\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172587 2624 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172596 2624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4jrft\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-kube-api-access-4jrft\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172604 2624 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-hostproc\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172612 2624 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-lib-modules\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172620 2624 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173170 kubelet[2624]: I0513 12:44:55.172627 2624 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-d7v4g\" (UniqueName: \"kubernetes.io/projected/4273a4e3-f4b7-4f4c-b408-bf73be0c0850-kube-api-access-d7v4g\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173337 kubelet[2624]: I0513 12:44:55.172635 2624 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be117f66-880a-4e5b-9087-43617c3621e1-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.173337 kubelet[2624]: I0513 12:44:55.172344 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.173337 kubelet[2624]: I0513 12:44:55.172661 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.173337 kubelet[2624]: I0513 12:44:55.172915 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.173337 kubelet[2624]: I0513 12:44:55.172934 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.173436 kubelet[2624]: I0513 12:44:55.172951 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cni-path" (OuterVolumeSpecName: "cni-path") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 13 12:44:55.174241 kubelet[2624]: I0513 12:44:55.174203 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be117f66-880a-4e5b-9087-43617c3621e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 13 12:44:55.174834 kubelet[2624]: I0513 12:44:55.174797 2624 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be117f66-880a-4e5b-9087-43617c3621e1" (UID: "be117f66-880a-4e5b-9087-43617c3621e1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 13 12:44:55.188303 kubelet[2624]: E0513 12:44:55.188277 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:44:55.233815 kubelet[2624]: E0513 12:44:55.233773 2624 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 12:44:55.273224 kubelet[2624]: I0513 12:44:55.273192 2624 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273224 kubelet[2624]: I0513 12:44:55.273218 2624 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273224 kubelet[2624]: I0513 12:44:55.273228 2624 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be117f66-880a-4e5b-9087-43617c3621e1-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273337 kubelet[2624]: I0513 12:44:55.273272 2624 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-cni-path\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273337 kubelet[2624]: I0513 12:44:55.273313 2624 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273337 kubelet[2624]: I0513 12:44:55.273326 2624 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be117f66-880a-4e5b-9087-43617c3621e1-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.273337 kubelet[2624]: I0513 12:44:55.273334 2624 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be117f66-880a-4e5b-9087-43617c3621e1-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 13 12:44:55.395075 kubelet[2624]: I0513 12:44:55.395050 2624 scope.go:117] "RemoveContainer" containerID="f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7"
May 13 12:44:55.398032 containerd[1514]: time="2025-05-13T12:44:55.397497376Z" level=info msg="RemoveContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\""
May 13 12:44:55.399340 systemd[1]: Removed slice kubepods-besteffort-pod4273a4e3_f4b7_4f4c_b408_bf73be0c0850.slice - libcontainer container kubepods-besteffort-pod4273a4e3_f4b7_4f4c_b408_bf73be0c0850.slice.
May 13 12:44:55.406711 systemd[1]: Removed slice kubepods-burstable-podbe117f66_880a_4e5b_9087_43617c3621e1.slice - libcontainer container kubepods-burstable-podbe117f66_880a_4e5b_9087_43617c3621e1.slice.
May 13 12:44:55.406921 systemd[1]: kubepods-burstable-podbe117f66_880a_4e5b_9087_43617c3621e1.slice: Consumed 6.452s CPU time, 127.9M memory peak, 5.1M read from disk, 14.1M written to disk.
May 13 12:44:55.417671 containerd[1514]: time="2025-05-13T12:44:55.417591183Z" level=info msg="RemoveContainer for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" returns successfully" May 13 12:44:55.418443 kubelet[2624]: I0513 12:44:55.417873 2624 scope.go:117] "RemoveContainer" containerID="f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7" May 13 12:44:55.419395 containerd[1514]: time="2025-05-13T12:44:55.419349801Z" level=error msg="ContainerStatus for \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\": not found" May 13 12:44:55.432265 kubelet[2624]: E0513 12:44:55.432163 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\": not found" containerID="f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7" May 13 12:44:55.432337 kubelet[2624]: I0513 12:44:55.432217 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7"} err="failed to get container status \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f6e5f1bde7ecf46b7a349d3cf0c40d76abb54af6273d8682d7a4738cfae2a5c7\": not found" May 13 12:44:55.432337 kubelet[2624]: I0513 12:44:55.432295 2624 scope.go:117] "RemoveContainer" containerID="3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884" May 13 12:44:55.434494 containerd[1514]: time="2025-05-13T12:44:55.434459157Z" level=info msg="RemoveContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\"" May 13 12:44:55.439011 
containerd[1514]: time="2025-05-13T12:44:55.438964164Z" level=info msg="RemoveContainer for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" returns successfully" May 13 12:44:55.439197 kubelet[2624]: I0513 12:44:55.439171 2624 scope.go:117] "RemoveContainer" containerID="dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c" May 13 12:44:55.440749 containerd[1514]: time="2025-05-13T12:44:55.440723062Z" level=info msg="RemoveContainer for \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\"" May 13 12:44:55.444180 containerd[1514]: time="2025-05-13T12:44:55.444123977Z" level=info msg="RemoveContainer for \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" returns successfully" May 13 12:44:55.444359 kubelet[2624]: I0513 12:44:55.444333 2624 scope.go:117] "RemoveContainer" containerID="4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a" May 13 12:44:55.446426 containerd[1514]: time="2025-05-13T12:44:55.446399840Z" level=info msg="RemoveContainer for \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\"" May 13 12:44:55.449629 containerd[1514]: time="2025-05-13T12:44:55.449593993Z" level=info msg="RemoveContainer for \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" returns successfully" May 13 12:44:55.449790 kubelet[2624]: I0513 12:44:55.449768 2624 scope.go:117] "RemoveContainer" containerID="cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047" May 13 12:44:55.451386 containerd[1514]: time="2025-05-13T12:44:55.451362131Z" level=info msg="RemoveContainer for \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\"" May 13 12:44:55.453971 containerd[1514]: time="2025-05-13T12:44:55.453933358Z" level=info msg="RemoveContainer for \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" returns successfully" May 13 12:44:55.454227 kubelet[2624]: I0513 12:44:55.454114 2624 scope.go:117] "RemoveContainer" 
containerID="0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd" May 13 12:44:55.455510 containerd[1514]: time="2025-05-13T12:44:55.455465454Z" level=info msg="RemoveContainer for \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\"" May 13 12:44:55.457776 containerd[1514]: time="2025-05-13T12:44:55.457740157Z" level=info msg="RemoveContainer for \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" returns successfully" May 13 12:44:55.457990 kubelet[2624]: I0513 12:44:55.457891 2624 scope.go:117] "RemoveContainer" containerID="3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884" May 13 12:44:55.458134 containerd[1514]: time="2025-05-13T12:44:55.458108121Z" level=error msg="ContainerStatus for \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\": not found" May 13 12:44:55.458309 kubelet[2624]: E0513 12:44:55.458288 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\": not found" containerID="3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884" May 13 12:44:55.458398 kubelet[2624]: I0513 12:44:55.458376 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884"} err="failed to get container status \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b2ce3f046961b3466d530449dd3c3c395c96f7343ea1c64042e155e90d88884\": not found" May 13 12:44:55.458461 kubelet[2624]: I0513 12:44:55.458451 2624 scope.go:117] "RemoveContainer" 
containerID="dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c" May 13 12:44:55.458691 containerd[1514]: time="2025-05-13T12:44:55.458657287Z" level=error msg="ContainerStatus for \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\": not found" May 13 12:44:55.458774 kubelet[2624]: E0513 12:44:55.458749 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\": not found" containerID="dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c" May 13 12:44:55.458851 kubelet[2624]: I0513 12:44:55.458778 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c"} err="failed to get container status \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc5cd10aa1518566673d94c4da896fc902a5d69f793da23dc86ed9ef45ce617c\": not found" May 13 12:44:55.458851 kubelet[2624]: I0513 12:44:55.458797 2624 scope.go:117] "RemoveContainer" containerID="4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a" May 13 12:44:55.459124 containerd[1514]: time="2025-05-13T12:44:55.458919049Z" level=error msg="ContainerStatus for \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\": not found" May 13 12:44:55.459190 kubelet[2624]: E0513 12:44:55.459022 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\": not found" containerID="4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a" May 13 12:44:55.459190 kubelet[2624]: I0513 12:44:55.459044 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a"} err="failed to get container status \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4bc9d061455ac696d2297d1d97451811c90f4ae0b15c8559041a1d525b22c20a\": not found" May 13 12:44:55.459190 kubelet[2624]: I0513 12:44:55.459057 2624 scope.go:117] "RemoveContainer" containerID="cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047" May 13 12:44:55.459330 containerd[1514]: time="2025-05-13T12:44:55.459295013Z" level=error msg="ContainerStatus for \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\": not found" May 13 12:44:55.459428 kubelet[2624]: E0513 12:44:55.459410 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\": not found" containerID="cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047" May 13 12:44:55.459465 kubelet[2624]: I0513 12:44:55.459431 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047"} err="failed to get container status \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"cefde2a618295d1d467cb995bcdd60c286b2cbe7d251182f1465f9b9ea018047\": not found" May 13 12:44:55.459465 kubelet[2624]: I0513 12:44:55.459444 2624 scope.go:117] "RemoveContainer" containerID="0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd" May 13 12:44:55.459637 containerd[1514]: time="2025-05-13T12:44:55.459607056Z" level=error msg="ContainerStatus for \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\": not found" May 13 12:44:55.459773 kubelet[2624]: E0513 12:44:55.459726 2624 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\": not found" containerID="0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd" May 13 12:44:55.459773 kubelet[2624]: I0513 12:44:55.459750 2624 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd"} err="failed to get container status \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0840ebb953e653b0f2d42ddc812140bb3a79028a56efd48ecef88f325922e5dd\": not found" May 13 12:44:55.878466 systemd[1]: var-lib-kubelet-pods-4273a4e3\x2df4b7\x2d4f4c\x2db408\x2dbf73be0c0850-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7v4g.mount: Deactivated successfully. May 13 12:44:55.878566 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-27ebcbe9cc3ca0331ce60edc4ffdd6234a31339ce322a14a33a610eae248213e-shm.mount: Deactivated successfully. 
May 13 12:44:55.878616 systemd[1]: var-lib-kubelet-pods-be117f66\x2d880a\x2d4e5b\x2d9087\x2d43617c3621e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4jrft.mount: Deactivated successfully. May 13 12:44:55.878682 systemd[1]: var-lib-kubelet-pods-be117f66\x2d880a\x2d4e5b\x2d9087\x2d43617c3621e1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 12:44:55.878728 systemd[1]: var-lib-kubelet-pods-be117f66\x2d880a\x2d4e5b\x2d9087\x2d43617c3621e1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 12:44:56.190964 kubelet[2624]: I0513 12:44:56.190835 2624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4273a4e3-f4b7-4f4c-b408-bf73be0c0850" path="/var/lib/kubelet/pods/4273a4e3-f4b7-4f4c-b408-bf73be0c0850/volumes" May 13 12:44:56.191824 kubelet[2624]: I0513 12:44:56.191367 2624 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be117f66-880a-4e5b-9087-43617c3621e1" path="/var/lib/kubelet/pods/be117f66-880a-4e5b-9087-43617c3621e1/volumes" May 13 12:44:56.795575 sshd[4215]: Connection closed by 10.0.0.1 port 51288 May 13 12:44:56.796242 sshd-session[4213]: pam_unix(sshd:session): session closed for user core May 13 12:44:56.806252 systemd[1]: sshd@22-10.0.0.71:22-10.0.0.1:51288.service: Deactivated successfully. May 13 12:44:56.807871 systemd[1]: session-23.scope: Deactivated successfully. May 13 12:44:56.808104 systemd[1]: session-23.scope: Consumed 1.079s CPU time, 24M memory peak. May 13 12:44:56.808624 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit. May 13 12:44:56.811538 systemd[1]: Started sshd@23-10.0.0.71:22-10.0.0.1:51304.service - OpenSSH per-connection server daemon (10.0.0.1:51304). May 13 12:44:56.812083 systemd-logind[1500]: Removed session 23. 
May 13 12:44:56.863428 sshd[4375]: Accepted publickey for core from 10.0.0.1 port 51304 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:56.864650 sshd-session[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:56.869397 systemd-logind[1500]: New session 24 of user core. May 13 12:44:56.873772 containerd[1514]: time="2025-05-13T12:44:56.873729790Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1747140294 nanos:913380201}" May 13 12:44:56.876333 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 12:44:58.186483 sshd[4377]: Connection closed by 10.0.0.1 port 51304 May 13 12:44:58.187400 sshd-session[4375]: pam_unix(sshd:session): session closed for user core May 13 12:44:58.196872 systemd[1]: sshd@23-10.0.0.71:22-10.0.0.1:51304.service: Deactivated successfully. May 13 12:44:58.198949 systemd[1]: session-24.scope: Deactivated successfully. May 13 12:44:58.199119 systemd[1]: session-24.scope: Consumed 1.239s CPU time, 26.1M memory peak. May 13 12:44:58.200782 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit. May 13 12:44:58.206415 systemd[1]: Started sshd@24-10.0.0.71:22-10.0.0.1:51320.service - OpenSSH per-connection server daemon (10.0.0.1:51320). 
May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207820 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="mount-cgroup" May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207850 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4273a4e3-f4b7-4f4c-b408-bf73be0c0850" containerName="cilium-operator" May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207860 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="clean-cilium-state" May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207866 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="apply-sysctl-overwrites" May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207871 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="mount-bpf-fs" May 13 12:44:58.207944 kubelet[2624]: E0513 12:44:58.207877 2624 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="cilium-agent" May 13 12:44:58.207944 kubelet[2624]: I0513 12:44:58.207898 2624 memory_manager.go:354] "RemoveStaleState removing state" podUID="be117f66-880a-4e5b-9087-43617c3621e1" containerName="cilium-agent" May 13 12:44:58.207944 kubelet[2624]: I0513 12:44:58.207904 2624 memory_manager.go:354] "RemoveStaleState removing state" podUID="4273a4e3-f4b7-4f4c-b408-bf73be0c0850" containerName="cilium-operator" May 13 12:44:58.211061 systemd-logind[1500]: Removed session 24. May 13 12:44:58.226572 systemd[1]: Created slice kubepods-burstable-podd1d53b67_03de_454a_a0c9_5757d1fce4ab.slice - libcontainer container kubepods-burstable-podd1d53b67_03de_454a_a0c9_5757d1fce4ab.slice. 
May 13 12:44:58.260641 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 51320 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:58.262343 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:58.269203 systemd-logind[1500]: New session 25 of user core. May 13 12:44:58.278958 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 12:44:58.330394 sshd[4391]: Connection closed by 10.0.0.1 port 51320 May 13 12:44:58.330933 sshd-session[4389]: pam_unix(sshd:session): session closed for user core May 13 12:44:58.352418 systemd[1]: sshd@24-10.0.0.71:22-10.0.0.1:51320.service: Deactivated successfully. May 13 12:44:58.354056 systemd[1]: session-25.scope: Deactivated successfully. May 13 12:44:58.355206 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit. May 13 12:44:58.357004 systemd[1]: Started sshd@25-10.0.0.71:22-10.0.0.1:51326.service - OpenSSH per-connection server daemon (10.0.0.1:51326). May 13 12:44:58.357994 systemd-logind[1500]: Removed session 25. 
May 13 12:44:58.388802 kubelet[2624]: I0513 12:44:58.388733 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-xtables-lock\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388802 kubelet[2624]: I0513 12:44:58.388771 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-host-proc-sys-kernel\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388802 kubelet[2624]: I0513 12:44:58.388794 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d1d53b67-03de-454a-a0c9-5757d1fce4ab-clustermesh-secrets\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388813 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d1d53b67-03de-454a-a0c9-5757d1fce4ab-cilium-config-path\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388829 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-etc-cni-netd\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388848 2624 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-cilium-cgroup\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388864 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-cni-path\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388884 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d1d53b67-03de-454a-a0c9-5757d1fce4ab-cilium-ipsec-secrets\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.388939 kubelet[2624]: I0513 12:44:58.388925 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-bpf-maps\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389063 kubelet[2624]: I0513 12:44:58.388948 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-hostproc\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389063 kubelet[2624]: I0513 12:44:58.388964 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-host-proc-sys-net\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389063 kubelet[2624]: I0513 12:44:58.388995 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-cilium-run\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389063 kubelet[2624]: I0513 12:44:58.389016 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d1d53b67-03de-454a-a0c9-5757d1fce4ab-hubble-tls\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389063 kubelet[2624]: I0513 12:44:58.389032 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99kn2\" (UniqueName: \"kubernetes.io/projected/d1d53b67-03de-454a-a0c9-5757d1fce4ab-kube-api-access-99kn2\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.389197 kubelet[2624]: I0513 12:44:58.389084 2624 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1d53b67-03de-454a-a0c9-5757d1fce4ab-lib-modules\") pod \"cilium-cd9xb\" (UID: \"d1d53b67-03de-454a-a0c9-5757d1fce4ab\") " pod="kube-system/cilium-cd9xb" May 13 12:44:58.412911 sshd[4398]: Accepted publickey for core from 10.0.0.1 port 51326 ssh2: RSA SHA256:HV7SwMkgpUcGbG5PTBCNGAhaEvexdMAt2yN/TIbGAFk May 13 12:44:58.414910 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 12:44:58.419326 systemd-logind[1500]: New 
session 26 of user core. May 13 12:44:58.429295 systemd[1]: Started session-26.scope - Session 26 of User core. May 13 12:44:58.531581 kubelet[2624]: E0513 12:44:58.531466 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:44:58.531984 containerd[1514]: time="2025-05-13T12:44:58.531942273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd9xb,Uid:d1d53b67-03de-454a-a0c9-5757d1fce4ab,Namespace:kube-system,Attempt:0,}" May 13 12:44:58.557750 containerd[1514]: time="2025-05-13T12:44:58.557649637Z" level=info msg="connecting to shim 883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" namespace=k8s.io protocol=ttrpc version=3 May 13 12:44:58.578318 systemd[1]: Started cri-containerd-883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e.scope - libcontainer container 883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e. 
May 13 12:44:58.602441 containerd[1514]: time="2025-05-13T12:44:58.602400222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cd9xb,Uid:d1d53b67-03de-454a-a0c9-5757d1fce4ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\"" May 13 12:44:58.603489 kubelet[2624]: E0513 12:44:58.603465 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:44:58.610524 containerd[1514]: time="2025-05-13T12:44:58.610445419Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 12:44:58.618973 containerd[1514]: time="2025-05-13T12:44:58.618928379Z" level=info msg="Container 067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a: CDI devices from CRI Config.CDIDevices: []" May 13 12:44:58.624701 containerd[1514]: time="2025-05-13T12:44:58.624644514Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\"" May 13 12:44:58.625125 containerd[1514]: time="2025-05-13T12:44:58.625104358Z" level=info msg="StartContainer for \"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\"" May 13 12:44:58.626097 containerd[1514]: time="2025-05-13T12:44:58.626054487Z" level=info msg="connecting to shim 067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" protocol=ttrpc version=3 May 13 12:44:58.650312 systemd[1]: Started cri-containerd-067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a.scope - libcontainer 
container 067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a. May 13 12:44:58.673391 containerd[1514]: time="2025-05-13T12:44:58.673339416Z" level=info msg="StartContainer for \"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\" returns successfully" May 13 12:44:58.685831 systemd[1]: cri-containerd-067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a.scope: Deactivated successfully. May 13 12:44:58.687206 containerd[1514]: time="2025-05-13T12:44:58.687129787Z" level=info msg="TaskExit event in podsandbox handler container_id:\"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\" id:\"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\" pid:4471 exited_at:{seconds:1747140298 nanos:686752824}" May 13 12:44:58.687273 containerd[1514]: time="2025-05-13T12:44:58.687241028Z" level=info msg="received exit event container_id:\"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\" id:\"067d0bc64053805c84af73f7ae9ec31dbf25d4ac4f74e70658e379fb211b6f7a\" pid:4471 exited_at:{seconds:1747140298 nanos:686752824}" May 13 12:44:59.414155 kubelet[2624]: E0513 12:44:59.414089 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 12:44:59.418511 containerd[1514]: time="2025-05-13T12:44:59.418468751Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 12:44:59.429896 containerd[1514]: time="2025-05-13T12:44:59.429847056Z" level=info msg="Container 3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4: CDI devices from CRI Config.CDIDevices: []" May 13 12:44:59.436385 containerd[1514]: time="2025-05-13T12:44:59.436345597Z" level=info msg="CreateContainer within sandbox 
\"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\"" May 13 12:44:59.438379 containerd[1514]: time="2025-05-13T12:44:59.438346735Z" level=info msg="StartContainer for \"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\"" May 13 12:44:59.439430 containerd[1514]: time="2025-05-13T12:44:59.439406705Z" level=info msg="connecting to shim 3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" protocol=ttrpc version=3 May 13 12:44:59.465372 systemd[1]: Started cri-containerd-3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4.scope - libcontainer container 3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4. May 13 12:44:59.502871 containerd[1514]: time="2025-05-13T12:44:59.502834451Z" level=info msg="StartContainer for \"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\" returns successfully" May 13 12:44:59.508853 systemd[1]: cri-containerd-3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4.scope: Deactivated successfully. 
May 13 12:44:59.509214 containerd[1514]: time="2025-05-13T12:44:59.509177510Z" level=info msg="received exit event container_id:\"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\" id:\"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\" pid:4516 exited_at:{seconds:1747140299 nanos:508972308}"
May 13 12:44:59.509423 containerd[1514]: time="2025-05-13T12:44:59.509393792Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\" id:\"3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4\" pid:4516 exited_at:{seconds:1747140299 nanos:508972308}"
May 13 12:44:59.525720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3602b3adb01360f3114f3e7a38879846feb9db430af1a4a9dfed24e9c4c38fc4-rootfs.mount: Deactivated successfully.
May 13 12:45:00.234694 kubelet[2624]: E0513 12:45:00.234649 2624 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 12:45:00.420696 kubelet[2624]: E0513 12:45:00.420659 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:00.423169 containerd[1514]: time="2025-05-13T12:45:00.422965619Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 12:45:00.440299 containerd[1514]: time="2025-05-13T12:45:00.440244974Z" level=info msg="Container 94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021: CDI devices from CRI Config.CDIDevices: []"
May 13 12:45:00.450957 containerd[1514]: time="2025-05-13T12:45:00.450905950Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\""
May 13 12:45:00.451537 containerd[1514]: time="2025-05-13T12:45:00.451512076Z" level=info msg="StartContainer for \"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\""
May 13 12:45:00.452977 containerd[1514]: time="2025-05-13T12:45:00.452933488Z" level=info msg="connecting to shim 94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" protocol=ttrpc version=3
May 13 12:45:00.472334 systemd[1]: Started cri-containerd-94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021.scope - libcontainer container 94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021.
May 13 12:45:00.529912 systemd[1]: cri-containerd-94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021.scope: Deactivated successfully.
May 13 12:45:00.530947 containerd[1514]: time="2025-05-13T12:45:00.530826670Z" level=info msg="StartContainer for \"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\" returns successfully"
May 13 12:45:00.532009 containerd[1514]: time="2025-05-13T12:45:00.531970160Z" level=info msg="received exit event container_id:\"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\" id:\"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\" pid:4562 exited_at:{seconds:1747140300 nanos:531636437}"
May 13 12:45:00.532256 containerd[1514]: time="2025-05-13T12:45:00.532214162Z" level=info msg="TaskExit event in podsandbox handler container_id:\"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\" id:\"94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021\" pid:4562 exited_at:{seconds:1747140300 nanos:531636437}"
May 13 12:45:00.555929 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94020e2e4c4ecf033243386c8a1006b9350636b13721df3354f0028427641021-rootfs.mount: Deactivated successfully.
May 13 12:45:01.425408 kubelet[2624]: E0513 12:45:01.425378 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:01.428530 containerd[1514]: time="2025-05-13T12:45:01.428487252Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 12:45:01.437713 containerd[1514]: time="2025-05-13T12:45:01.437075447Z" level=info msg="Container d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098: CDI devices from CRI Config.CDIDevices: []"
May 13 12:45:01.446240 containerd[1514]: time="2025-05-13T12:45:01.446134526Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\""
May 13 12:45:01.447417 containerd[1514]: time="2025-05-13T12:45:01.447333937Z" level=info msg="StartContainer for \"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\""
May 13 12:45:01.448669 containerd[1514]: time="2025-05-13T12:45:01.448639988Z" level=info msg="connecting to shim d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" protocol=ttrpc version=3
May 13 12:45:01.457770 kubelet[2624]: I0513 12:45:01.457616 2624 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T12:45:01Z","lastTransitionTime":"2025-05-13T12:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 13 12:45:01.477319 systemd[1]: Started cri-containerd-d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098.scope - libcontainer container d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098.
May 13 12:45:01.499704 systemd[1]: cri-containerd-d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098.scope: Deactivated successfully.
May 13 12:45:01.505544 containerd[1514]: time="2025-05-13T12:45:01.505402726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\" id:\"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\" pid:4601 exited_at:{seconds:1747140301 nanos:505118244}"
May 13 12:45:01.505544 containerd[1514]: time="2025-05-13T12:45:01.505418966Z" level=info msg="received exit event container_id:\"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\" id:\"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\" pid:4601 exited_at:{seconds:1747140301 nanos:505118244}"
May 13 12:45:01.506727 containerd[1514]: time="2025-05-13T12:45:01.506486336Z" level=info msg="StartContainer for \"d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098\" returns successfully"
May 13 12:45:01.522395 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d94638c4412ee00fd9a75cc544a1edf6604206ad86fe96d04c3c04dd0c6a9098-rootfs.mount: Deactivated successfully.
May 13 12:45:02.431059 kubelet[2624]: E0513 12:45:02.430938 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:02.433472 containerd[1514]: time="2025-05-13T12:45:02.432982960Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 12:45:02.448444 containerd[1514]: time="2025-05-13T12:45:02.448401972Z" level=info msg="Container 2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a: CDI devices from CRI Config.CDIDevices: []"
May 13 12:45:02.457650 containerd[1514]: time="2025-05-13T12:45:02.457608450Z" level=info msg="CreateContainer within sandbox \"883a0e18891d808f1479468cc19e1683cfa6e34318f552a4b723fd466511f98e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\""
May 13 12:45:02.458137 containerd[1514]: time="2025-05-13T12:45:02.458106774Z" level=info msg="StartContainer for \"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\""
May 13 12:45:02.459040 containerd[1514]: time="2025-05-13T12:45:02.459015862Z" level=info msg="connecting to shim 2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a" address="unix:///run/containerd/s/72605b3e2f1307c3455f948a09384afbd16fea23e46f13d585476cf085b02ea8" protocol=ttrpc version=3
May 13 12:45:02.480357 systemd[1]: Started cri-containerd-2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a.scope - libcontainer container 2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a.
May 13 12:45:02.509993 containerd[1514]: time="2025-05-13T12:45:02.509944537Z" level=info msg="StartContainer for \"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" returns successfully"
May 13 12:45:02.567638 containerd[1514]: time="2025-05-13T12:45:02.567599509Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" id:\"d9f1d50e035da35066c4ce36c694b05c32de10bd3490feb44abbd74e918f42b3\" pid:4670 exited_at:{seconds:1747140302 nanos:567055505}"
May 13 12:45:02.787162 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 12:45:03.437167 kubelet[2624]: E0513 12:45:03.437030 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:03.452043 kubelet[2624]: I0513 12:45:03.451900 2624 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cd9xb" podStartSLOduration=5.45188512 podStartE2EDuration="5.45188512s" podCreationTimestamp="2025-05-13 12:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 12:45:03.450228866 +0000 UTC m=+83.360442199" watchObservedRunningTime="2025-05-13 12:45:03.45188512 +0000 UTC m=+83.362098453"
May 13 12:45:04.534084 kubelet[2624]: E0513 12:45:04.533930 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:04.890329 containerd[1514]: time="2025-05-13T12:45:04.890287251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" id:\"4267d347ddda644eb3039cd393833f930293815df2f72400bdb15fa02ec2bff3\" pid:4980 exit_status:1 exited_at:{seconds:1747140304 nanos:889870007}"
May 13 12:45:05.635597 systemd-networkd[1429]: lxc_health: Link UP
May 13 12:45:05.635824 systemd-networkd[1429]: lxc_health: Gained carrier
May 13 12:45:06.534044 kubelet[2624]: E0513 12:45:06.533983 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:07.013159 containerd[1514]: time="2025-05-13T12:45:07.013114182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" id:\"a0123caae187ffa765a25c2e005f840fb90a413a0e7f3f7f611a6485dacf5f1f\" pid:5212 exited_at:{seconds:1747140307 nanos:12782220}"
May 13 12:45:07.032390 systemd-networkd[1429]: lxc_health: Gained IPv6LL
May 13 12:45:07.445268 kubelet[2624]: E0513 12:45:07.445230 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:08.450224 kubelet[2624]: E0513 12:45:08.450176 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:09.135576 containerd[1514]: time="2025-05-13T12:45:09.135511651Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" id:\"d2ee67420d6baaa116ab95a67bf154be057ad6689ab49fffb699b722e6dba2f1\" pid:5246 exited_at:{seconds:1747140309 nanos:134750125}"
May 13 12:45:10.188677 kubelet[2624]: E0513 12:45:10.188640 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 12:45:11.226693 containerd[1514]: time="2025-05-13T12:45:11.226648247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2f067932b822d97c3db120b5a2ad85242a7ed6d59437f89cd93e6aaa07cf4f7a\" id:\"a53abaf3eec6710aa2c36e9122d5ddb834445497818a5b391d304f46d678cb8a\" pid:5270 exited_at:{seconds:1747140311 nanos:225787081}"
May 13 12:45:11.238205 sshd[4400]: Connection closed by 10.0.0.1 port 51326
May 13 12:45:11.240462 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
May 13 12:45:11.247059 systemd[1]: sshd@25-10.0.0.71:22-10.0.0.1:51326.service: Deactivated successfully.
May 13 12:45:11.248743 systemd[1]: session-26.scope: Deactivated successfully.
May 13 12:45:11.249447 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit.
May 13 12:45:11.250809 systemd-logind[1500]: Removed session 26.
May 13 12:45:12.189162 kubelet[2624]: E0513 12:45:12.188976 2624 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"