May 12 13:37:17.899254 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 12 13:37:17.899275 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon May 12 12:12:07 -00 2025
May 12 13:37:17.899284 kernel: KASLR enabled
May 12 13:37:17.899290 kernel: efi: EFI v2.7 by EDK II
May 12 13:37:17.899295 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 12 13:37:17.899300 kernel: random: crng init done
May 12 13:37:17.899307 kernel: secureboot: Secure boot disabled
May 12 13:37:17.899312 kernel: ACPI: Early table checksum verification disabled
May 12 13:37:17.899318 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 12 13:37:17.899325 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 12 13:37:17.899330 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899336 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899353 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899359 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899366 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899374 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899380 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899386 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899392 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 12 13:37:17.899398 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 12 13:37:17.899403 kernel: NUMA: Failed to initialise from firmware
May 12 13:37:17.899410 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:37:17.899415 kernel: NUMA: NODE_DATA [mem 0xdc956e00-0xdc95dfff]
May 12 13:37:17.899421 kernel: Zone ranges:
May 12 13:37:17.899427 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:37:17.899435 kernel: DMA32 empty
May 12 13:37:17.899440 kernel: Normal empty
May 12 13:37:17.899446 kernel: Device empty
May 12 13:37:17.899452 kernel: Movable zone start for each node
May 12 13:37:17.899457 kernel: Early memory node ranges
May 12 13:37:17.899463 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 12 13:37:17.899469 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 12 13:37:17.899475 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 12 13:37:17.899481 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 12 13:37:17.899487 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 12 13:37:17.899492 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 12 13:37:17.899498 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 12 13:37:17.899504 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 12 13:37:17.899511 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 12 13:37:17.899517 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 12 13:37:17.899525 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 12 13:37:17.899532 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 12 13:37:17.899538 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 12 13:37:17.899546 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 12 13:37:17.899552 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 12 13:37:17.899559 kernel: psci: probing for conduit method from ACPI.
May 12 13:37:17.899565 kernel: psci: PSCIv1.1 detected in firmware.
May 12 13:37:17.899571 kernel: psci: Using standard PSCI v0.2 function IDs
May 12 13:37:17.899577 kernel: psci: Trusted OS migration not required
May 12 13:37:17.899583 kernel: psci: SMC Calling Convention v1.1
May 12 13:37:17.899590 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 12 13:37:17.899596 kernel: percpu: Embedded 31 pages/cpu s87016 r8192 d31768 u126976
May 12 13:37:17.899602 kernel: pcpu-alloc: s87016 r8192 d31768 u126976 alloc=31*4096
May 12 13:37:17.899609 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 12 13:37:17.899616 kernel: Detected PIPT I-cache on CPU0
May 12 13:37:17.899623 kernel: CPU features: detected: GIC system register CPU interface
May 12 13:37:17.899629 kernel: CPU features: detected: Hardware dirty bit management
May 12 13:37:17.899635 kernel: CPU features: detected: Spectre-v4
May 12 13:37:17.899641 kernel: CPU features: detected: Spectre-BHB
May 12 13:37:17.899647 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 12 13:37:17.899654 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 12 13:37:17.899660 kernel: CPU features: detected: ARM erratum 1418040
May 12 13:37:17.899666 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 12 13:37:17.899672 kernel: alternatives: applying boot alternatives
May 12 13:37:17.899679 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653a96bf2da883847e3396e932e31f09e53181a834ffc22434c3993d29b70a16
May 12 13:37:17.899688 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 12 13:37:17.899694 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 12 13:37:17.899701 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 12 13:37:17.899707 kernel: Fallback order for Node 0: 0
May 12 13:37:17.899713 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 12 13:37:17.899719 kernel: Policy zone: DMA
May 12 13:37:17.899725 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 12 13:37:17.899731 kernel: software IO TLB: area num 4.
May 12 13:37:17.899738 kernel: software IO TLB: mapped [mem 0x00000000d5000000-0x00000000d9000000] (64MB)
May 12 13:37:17.899744 kernel: Memory: 2386504K/2572288K available (10432K kernel code, 2202K rwdata, 8168K rodata, 39040K init, 993K bss, 185784K reserved, 0K cma-reserved)
May 12 13:37:17.899751 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 12 13:37:17.899758 kernel: rcu: Preemptible hierarchical RCU implementation.
May 12 13:37:17.899766 kernel: rcu: RCU event tracing is enabled.
May 12 13:37:17.899772 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 12 13:37:17.899779 kernel: Trampoline variant of Tasks RCU enabled.
May 12 13:37:17.899785 kernel: Tracing variant of Tasks RCU enabled.
May 12 13:37:17.899791 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 12 13:37:17.899798 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 12 13:37:17.899804 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 12 13:37:17.899810 kernel: GICv3: 256 SPIs implemented
May 12 13:37:17.899816 kernel: GICv3: 0 Extended SPIs implemented
May 12 13:37:17.899822 kernel: Root IRQ handler: gic_handle_irq
May 12 13:37:17.899829 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 12 13:37:17.899836 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 12 13:37:17.899842 kernel: ITS [mem 0x08080000-0x0809ffff]
May 12 13:37:17.899849 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 12 13:37:17.899856 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 12 13:37:17.899862 kernel: GICv3: using LPI property table @0x00000000400f0000
May 12 13:37:17.899868 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 12 13:37:17.899875 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 12 13:37:17.899881 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:37:17.899887 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 12 13:37:17.899894 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 12 13:37:17.899900 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 12 13:37:17.899908 kernel: arm-pv: using stolen time PV
May 12 13:37:17.899914 kernel: Console: colour dummy device 80x25
May 12 13:37:17.899921 kernel: ACPI: Core revision 20230628
May 12 13:37:17.899927 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 12 13:37:17.899934 kernel: pid_max: default: 32768 minimum: 301
May 12 13:37:17.899940 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 12 13:37:17.899947 kernel: landlock: Up and running.
May 12 13:37:17.899953 kernel: SELinux: Initializing.
May 12 13:37:17.899960 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:37:17.899967 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 12 13:37:17.899974 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 12 13:37:17.899981 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:37:17.899988 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 12 13:37:17.899994 kernel: rcu: Hierarchical SRCU implementation.
May 12 13:37:17.900000 kernel: rcu: Max phase no-delay instances is 400.
May 12 13:37:17.900007 kernel: Platform MSI: ITS@0x8080000 domain created
May 12 13:37:17.900013 kernel: PCI/MSI: ITS@0x8080000 domain created
May 12 13:37:17.900020 kernel: Remapping and enabling EFI services.
May 12 13:37:17.900028 kernel: smp: Bringing up secondary CPUs ...
May 12 13:37:17.900060 kernel: Detected PIPT I-cache on CPU1
May 12 13:37:17.900069 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 12 13:37:17.900078 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 12 13:37:17.900085 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:37:17.900092 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 12 13:37:17.900099 kernel: Detected PIPT I-cache on CPU2
May 12 13:37:17.900106 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 12 13:37:17.900113 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 12 13:37:17.900121 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:37:17.900128 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 12 13:37:17.900134 kernel: Detected PIPT I-cache on CPU3
May 12 13:37:17.900141 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 12 13:37:17.900148 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 12 13:37:17.900160 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 12 13:37:17.900166 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 12 13:37:17.900173 kernel: smp: Brought up 1 node, 4 CPUs
May 12 13:37:17.900180 kernel: SMP: Total of 4 processors activated.
May 12 13:37:17.900188 kernel: CPU features: detected: 32-bit EL0 Support
May 12 13:37:17.900195 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 12 13:37:17.900202 kernel: CPU features: detected: Common not Private translations
May 12 13:37:17.900209 kernel: CPU features: detected: CRC32 instructions
May 12 13:37:17.900216 kernel: CPU features: detected: Enhanced Virtualization Traps
May 12 13:37:17.900222 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 12 13:37:17.900229 kernel: CPU features: detected: LSE atomic instructions
May 12 13:37:17.900236 kernel: CPU features: detected: Privileged Access Never
May 12 13:37:17.900243 kernel: CPU features: detected: RAS Extension Support
May 12 13:37:17.900251 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 12 13:37:17.900258 kernel: CPU: All CPU(s) started at EL1
May 12 13:37:17.900265 kernel: alternatives: applying system-wide alternatives
May 12 13:37:17.900271 kernel: devtmpfs: initialized
May 12 13:37:17.900278 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 12 13:37:17.900285 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 12 13:37:17.900292 kernel: pinctrl core: initialized pinctrl subsystem
May 12 13:37:17.900299 kernel: SMBIOS 3.0.0 present.
May 12 13:37:17.900305 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 12 13:37:17.900313 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 12 13:37:17.900320 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 12 13:37:17.900327 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 12 13:37:17.900334 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 12 13:37:17.900344 kernel: audit: initializing netlink subsys (disabled)
May 12 13:37:17.900351 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
May 12 13:37:17.900358 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 12 13:37:17.900365 kernel: cpuidle: using governor menu
May 12 13:37:17.900372 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 12 13:37:17.900380 kernel: ASID allocator initialised with 32768 entries
May 12 13:37:17.900387 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 12 13:37:17.900393 kernel: Serial: AMBA PL011 UART driver
May 12 13:37:17.900400 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 12 13:37:17.900407 kernel: Modules: 0 pages in range for non-PLT usage
May 12 13:37:17.900414 kernel: Modules: 509024 pages in range for PLT usage
May 12 13:37:17.900421 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 12 13:37:17.900427 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 12 13:37:17.900434 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 12 13:37:17.900442 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 12 13:37:17.900449 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 12 13:37:17.900463 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 12 13:37:17.900470 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 12 13:37:17.900477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 12 13:37:17.900483 kernel: ACPI: Added _OSI(Module Device)
May 12 13:37:17.900490 kernel: ACPI: Added _OSI(Processor Device)
May 12 13:37:17.900497 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 12 13:37:17.900504 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 12 13:37:17.900512 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 12 13:37:17.900519 kernel: ACPI: Interpreter enabled
May 12 13:37:17.900526 kernel: ACPI: Using GIC for interrupt routing
May 12 13:37:17.900533 kernel: ACPI: MCFG table detected, 1 entries
May 12 13:37:17.900540 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 12 13:37:17.900546 kernel: printk: console [ttyAMA0] enabled
May 12 13:37:17.900553 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 12 13:37:17.900684 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 12 13:37:17.900759 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 12 13:37:17.900824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 12 13:37:17.900884 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 12 13:37:17.900945 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 12 13:37:17.900954 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 12 13:37:17.900961 kernel: PCI host bridge to bus 0000:00
May 12 13:37:17.901028 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 12 13:37:17.901102 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 12 13:37:17.901159 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 12 13:37:17.901215 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 12 13:37:17.901292 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 12 13:37:17.901373 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 12 13:37:17.901439 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 12 13:37:17.901502 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 12 13:37:17.901567 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 13:37:17.901628 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 12 13:37:17.901691 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 12 13:37:17.901753 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 12 13:37:17.901809 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 12 13:37:17.901864 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 12 13:37:17.901935 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 12 13:37:17.901947 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 12 13:37:17.901955 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 12 13:37:17.901962 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 12 13:37:17.901969 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 12 13:37:17.901976 kernel: iommu: Default domain type: Translated
May 12 13:37:17.901983 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 12 13:37:17.901990 kernel: efivars: Registered efivars operations
May 12 13:37:17.901996 kernel: vgaarb: loaded
May 12 13:37:17.902003 kernel: clocksource: Switched to clocksource arch_sys_counter
May 12 13:37:17.902012 kernel: VFS: Disk quotas dquot_6.6.0
May 12 13:37:17.902019 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 12 13:37:17.902025 kernel: pnp: PnP ACPI init
May 12 13:37:17.902107 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 12 13:37:17.902118 kernel: pnp: PnP ACPI: found 1 devices
May 12 13:37:17.902125 kernel: NET: Registered PF_INET protocol family
May 12 13:37:17.902131 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 12 13:37:17.902138 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 12 13:37:17.902148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 12 13:37:17.902154 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 12 13:37:17.902161 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 12 13:37:17.902169 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 12 13:37:17.902176 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:37:17.902183 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 12 13:37:17.902189 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 12 13:37:17.902196 kernel: PCI: CLS 0 bytes, default 64
May 12 13:37:17.902203 kernel: kvm [1]: HYP mode not available
May 12 13:37:17.902211 kernel: Initialise system trusted keyrings
May 12 13:37:17.902218 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 12 13:37:17.902225 kernel: Key type asymmetric registered
May 12 13:37:17.902231 kernel: Asymmetric key parser 'x509' registered
May 12 13:37:17.902238 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 12 13:37:17.902245 kernel: io scheduler mq-deadline registered
May 12 13:37:17.902252 kernel: io scheduler kyber registered
May 12 13:37:17.902259 kernel: io scheduler bfq registered
May 12 13:37:17.902266 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 12 13:37:17.902274 kernel: ACPI: button: Power Button [PWRB]
May 12 13:37:17.902281 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 12 13:37:17.902350 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 12 13:37:17.902360 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 12 13:37:17.902366 kernel: thunder_xcv, ver 1.0
May 12 13:37:17.902373 kernel: thunder_bgx, ver 1.0
May 12 13:37:17.902380 kernel: nicpf, ver 1.0
May 12 13:37:17.902387 kernel: nicvf, ver 1.0
May 12 13:37:17.902458 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 12 13:37:17.902521 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-12T13:37:17 UTC (1747057037)
May 12 13:37:17.902534 kernel: hid: raw HID events driver (C) Jiri Kosina
May 12 13:37:17.902541 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 12 13:37:17.902548 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 12 13:37:17.902555 kernel: watchdog: Hard watchdog permanently disabled
May 12 13:37:17.902562 kernel: NET: Registered PF_INET6 protocol family
May 12 13:37:17.902568 kernel: Segment Routing with IPv6
May 12 13:37:17.902575 kernel: In-situ OAM (IOAM) with IPv6
May 12 13:37:17.902584 kernel: NET: Registered PF_PACKET protocol family
May 12 13:37:17.902591 kernel: Key type dns_resolver registered
May 12 13:37:17.902598 kernel: registered taskstats version 1
May 12 13:37:17.902605 kernel: Loading compiled-in X.509 certificates
May 12 13:37:17.902612 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 8a19376c4ffd0604cd5425566348a3f0eeb277da'
May 12 13:37:17.902619 kernel: Key type .fscrypt registered
May 12 13:37:17.902626 kernel: Key type fscrypt-provisioning registered
May 12 13:37:17.902633 kernel: ima: No TPM chip found, activating TPM-bypass!
May 12 13:37:17.902640 kernel: ima: Allocated hash algorithm: sha1
May 12 13:37:17.902649 kernel: ima: No architecture policies found
May 12 13:37:17.902656 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 12 13:37:17.902663 kernel: clk: Disabling unused clocks
May 12 13:37:17.902670 kernel: Warning: unable to open an initial console.
May 12 13:37:17.902677 kernel: Freeing unused kernel memory: 39040K
May 12 13:37:17.902690 kernel: Run /init as init process
May 12 13:37:17.902697 kernel: with arguments:
May 12 13:37:17.902704 kernel: /init
May 12 13:37:17.902711 kernel: with environment:
May 12 13:37:17.902720 kernel: HOME=/
May 12 13:37:17.902726 kernel: TERM=linux
May 12 13:37:17.902733 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 12 13:37:17.902741 systemd[1]: Successfully made /usr/ read-only.
May 12 13:37:17.902751 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:37:17.902759 systemd[1]: Detected virtualization kvm.
May 12 13:37:17.902766 systemd[1]: Detected architecture arm64.
May 12 13:37:17.902774 systemd[1]: Running in initrd.
May 12 13:37:17.902782 systemd[1]: No hostname configured, using default hostname.
May 12 13:37:17.902789 systemd[1]: Hostname set to .
May 12 13:37:17.902797 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:37:17.902804 systemd[1]: Queued start job for default target initrd.target.
May 12 13:37:17.902812 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:37:17.902820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:37:17.902828 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 12 13:37:17.902837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:37:17.902845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 12 13:37:17.902853 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 12 13:37:17.902861 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 12 13:37:17.902869 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 12 13:37:17.902876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:37:17.902884 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:37:17.902892 systemd[1]: Reached target paths.target - Path Units.
May 12 13:37:17.902900 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:37:17.902908 systemd[1]: Reached target swap.target - Swaps.
May 12 13:37:17.902915 systemd[1]: Reached target timers.target - Timer Units.
May 12 13:37:17.902923 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:37:17.902931 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:37:17.902939 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 12 13:37:17.902947 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 12 13:37:17.902954 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:37:17.902963 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 13:37:17.902970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:37:17.902977 systemd[1]: Reached target sockets.target - Socket Units.
May 12 13:37:17.902985 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 12 13:37:17.902992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 13:37:17.902999 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 12 13:37:17.903008 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 12 13:37:17.903016 systemd[1]: Starting systemd-fsck-usr.service...
May 12 13:37:17.903024 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 13:37:17.903032 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 13:37:17.903046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:37:17.903054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:37:17.903063 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 12 13:37:17.903072 systemd[1]: Finished systemd-fsck-usr.service.
May 12 13:37:17.903079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 12 13:37:17.903102 systemd-journald[239]: Collecting audit messages is disabled.
May 12 13:37:17.903122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:37:17.903130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 12 13:37:17.903138 systemd-journald[239]: Journal started
May 12 13:37:17.903156 systemd-journald[239]: Runtime Journal (/run/log/journal/30e5961f022f4a2184fcb3fd49c736fd) is 5.9M, max 47.3M, 41.4M free.
May 12 13:37:17.885576 systemd-modules-load[240]: Inserted module 'overlay'
May 12 13:37:17.906056 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 12 13:37:17.908863 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 13:37:17.909471 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 12 13:37:17.910378 kernel: Bridge firewalling registered
May 12 13:37:17.918348 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 13:37:17.919537 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 12 13:37:17.923886 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:37:17.925531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 13:37:17.935407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 13:37:17.936911 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 12 13:37:17.939882 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 12 13:37:17.943837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:37:17.945658 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:37:17.947250 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 12 13:37:17.950471 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:37:17.953361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 13:37:17.957051 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=653a96bf2da883847e3396e932e31f09e53181a834ffc22434c3993d29b70a16
May 12 13:37:17.996801 systemd-resolved[291]: Positive Trust Anchors:
May 12 13:37:17.996818 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 13:37:17.996848 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 13:37:18.001885 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 12 13:37:18.005295 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 13:37:18.007080 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 13:37:18.036060 kernel: SCSI subsystem initialized
May 12 13:37:18.041061 kernel: Loading iSCSI transport class v2.0-870.
May 12 13:37:18.048075 kernel: iscsi: registered transport (tcp)
May 12 13:37:18.060481 kernel: iscsi: registered transport (qla4xxx)
May 12 13:37:18.060506 kernel: QLogic iSCSI HBA Driver
May 12 13:37:18.078658 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 13:37:18.094371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:37:18.095940 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 13:37:18.140960 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 12 13:37:18.143178 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 12 13:37:18.204077 kernel: raid6: neonx8 gen() 15796 MB/s
May 12 13:37:18.221071 kernel: raid6: neonx4 gen() 15802 MB/s
May 12 13:37:18.238069 kernel: raid6: neonx2 gen() 13204 MB/s
May 12 13:37:18.255061 kernel: raid6: neonx1 gen() 10511 MB/s
May 12 13:37:18.272059 kernel: raid6: int64x8 gen() 6788 MB/s
May 12 13:37:18.289060 kernel: raid6: int64x4 gen() 7344 MB/s
May 12 13:37:18.306066 kernel: raid6: int64x2 gen() 6108 MB/s
May 12 13:37:18.323230 kernel: raid6: int64x1 gen() 5047 MB/s
May 12 13:37:18.323251 kernel: raid6: using algorithm neonx4 gen() 15802 MB/s
May 12 13:37:18.341130 kernel: raid6: .... xor() 12404 MB/s, rmw enabled
May 12 13:37:18.341159 kernel: raid6: using neon recovery algorithm
May 12 13:37:18.346291 kernel: xor: measuring software checksum speed
May 12 13:37:18.346304 kernel: 8regs : 21516 MB/sec
May 12 13:37:18.347069 kernel: 32regs : 20802 MB/sec
May 12 13:37:18.348204 kernel: arm64_neon : 23328 MB/sec
May 12 13:37:18.348215 kernel: xor: using function: arm64_neon (23328 MB/sec)
May 12 13:37:18.401069 kernel: Btrfs loaded, zoned=no, fsverity=no
May 12 13:37:18.407333 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 12 13:37:18.409813 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:37:18.439269 systemd-udevd[495]: Using default interface naming scheme 'v255'.
May 12 13:37:18.443595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
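The raid6 lines above show the kernel benchmarking every available gen() implementation and then picking the fastest one (here neonx4 at 15802 MB/s). A minimal sketch of that selection step, using the throughput figures from this log as input (the dictionary below is just a transcription of the log, not a kernel API):

```python
# Benchmark results (MB/s) exactly as reported in the raid6 log lines above.
results = {
    "neonx8": 15796,
    "neonx4": 15802,
    "neonx2": 13204,
    "neonx1": 10511,
    "int64x8": 6788,
    "int64x4": 7344,
    "int64x2": 6108,
    "int64x1": 5047,
}

# The kernel keeps the implementation with the highest measured throughput.
best = max(results, key=results.get)
print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
```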
May 12 13:37:18.447265 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 12 13:37:18.468181 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
May 12 13:37:18.489560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 13:37:18.491905 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 13:37:18.549765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:37:18.552350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 12 13:37:18.601063 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 12 13:37:18.601241 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 12 13:37:18.612385 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 12 13:37:18.612437 kernel: GPT:9289727 != 19775487
May 12 13:37:18.612446 kernel: GPT:Alternate GPT header not at the end of the disk.
May 12 13:37:18.612455 kernel: GPT:9289727 != 19775487
May 12 13:37:18.613383 kernel: GPT: Use GNU Parted to correct GPT errors.
May 12 13:37:18.613413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:37:18.616425 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 12 13:37:18.616540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:37:18.620164 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:37:18.622267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 12 13:37:18.633274 kernel: BTRFS: device fsid 883e681e-770a-479b-951e-bb0dc342f721 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (542)
May 12 13:37:18.638066 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (540)
May 12 13:37:18.644071 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
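The GPT warnings above are the classic signature of a disk image that was grown after creation: the primary header still records the backup (alternate) GPT header at the old end of the disk (LBA 9289727) instead of at the last sector of the enlarged device. A small sketch of the consistency check the kernel is performing, using the sizes reported in this log:

```python
# Values reported in the log above.
total_sectors = 19775488   # virtio_blk: [vda] 19775488 512-byte logical blocks
alternate_lba = 9289727    # where the primary GPT header says the backup header lives

# For a valid GPT, the backup (alternate) header must occupy the last LBA of the disk.
expected_alternate = total_sectors - 1

if alternate_lba != expected_alternate:
    # Mirrors the kernel's complaint format.
    print(f"GPT:{alternate_lba} != {expected_alternate}")
    print("GPT:Alternate GPT header not at the end of the disk.")
```

This is only a diagnostic sketch; the actual repair is done on disk (the log suggests GNU Parted), which rewrites the backup header at the true last LBA.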
May 12 13:37:18.646638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 12 13:37:18.656698 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 12 13:37:18.662639 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 12 13:37:18.673864 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 12 13:37:18.675158 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 12 13:37:18.684581 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 13:37:18.685829 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 13:37:18.687950 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:37:18.690105 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 13:37:18.692882 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 12 13:37:18.694741 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 12 13:37:18.708861 disk-uuid[587]: Primary Header is updated.
May 12 13:37:18.708861 disk-uuid[587]: Secondary Entries is updated.
May 12 13:37:18.708861 disk-uuid[587]: Secondary Header is updated.
May 12 13:37:18.714739 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 12 13:37:18.717535 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:37:18.721066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:37:19.723080 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 12 13:37:19.723657 disk-uuid[592]: The operation has completed successfully.
May 12 13:37:19.747951 systemd[1]: disk-uuid.service: Deactivated successfully.
May 12 13:37:19.749135 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 12 13:37:19.775782 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 12 13:37:19.794073 sh[607]: Success
May 12 13:37:19.810118 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 12 13:37:19.810698 kernel: device-mapper: uevent: version 1.0.3
May 12 13:37:19.810709 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 12 13:37:19.822067 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 12 13:37:19.847735 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 12 13:37:19.850670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 12 13:37:19.869311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 12 13:37:19.879059 kernel: BTRFS info (device dm-0): first mount of filesystem 883e681e-770a-479b-951e-bb0dc342f721
May 12 13:37:19.879102 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 12 13:37:19.879113 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 12 13:37:19.879123 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 12 13:37:19.880434 kernel: BTRFS info (device dm-0): using free space tree
May 12 13:37:19.883639 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 12 13:37:19.884974 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 12 13:37:19.886349 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 12 13:37:19.887146 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 12 13:37:19.888714 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 12 13:37:19.917625 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:37:19.917685 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:37:19.917696 kernel: BTRFS info (device vda6): using free space tree
May 12 13:37:19.921071 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:37:19.925066 kernel: BTRFS info (device vda6): last unmount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:37:19.929118 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 12 13:37:19.931128 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 12 13:37:19.995084 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 12 13:37:19.998884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 13:37:20.046088 systemd-networkd[791]: lo: Link UP
May 12 13:37:20.046100 systemd-networkd[791]: lo: Gained carrier
May 12 13:37:20.046899 systemd-networkd[791]: Enumeration completed
May 12 13:37:20.047417 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:37:20.047420 systemd-networkd[791]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 12 13:37:20.048107 systemd-networkd[791]: eth0: Link UP
May 12 13:37:20.048110 systemd-networkd[791]: eth0: Gained carrier
May 12 13:37:20.048119 systemd-networkd[791]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 12 13:37:20.048155 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 12 13:37:20.049358 systemd[1]: Reached target network.target - Network.
May 12 13:37:20.062082 systemd-networkd[791]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 12 13:37:20.076451 ignition[702]: Ignition 2.21.0
May 12 13:37:20.076468 ignition[702]: Stage: fetch-offline
May 12 13:37:20.076509 ignition[702]: no configs at "/usr/lib/ignition/base.d"
May 12 13:37:20.076517 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:20.076818 ignition[702]: parsed url from cmdline: ""
May 12 13:37:20.076821 ignition[702]: no config URL provided
May 12 13:37:20.076826 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
May 12 13:37:20.076834 ignition[702]: no config at "/usr/lib/ignition/user.ign"
May 12 13:37:20.076868 ignition[702]: op(1): [started] loading QEMU firmware config module
May 12 13:37:20.076873 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 12 13:37:20.086140 ignition[702]: op(1): [finished] loading QEMU firmware config module
May 12 13:37:20.122846 ignition[702]: parsing config with SHA512: 12f56a3833c9bbb2a3e2ffd351502edecbdcdabc682a7cddaf34eb1a827c02e61a19e6b026c31cfd0040296be44b4f77a2770493caf2f611972f9cab32738227
May 12 13:37:20.129575 unknown[702]: fetched base config from "system"
May 12 13:37:20.129586 unknown[702]: fetched user config from "qemu"
May 12 13:37:20.129998 ignition[702]: fetch-offline: fetch-offline passed
May 12 13:37:20.130072 ignition[702]: Ignition finished successfully
May 12 13:37:20.132874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 13:37:20.134280 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 12 13:37:20.135092 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
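Before applying the fetched config, Ignition logs the SHA512 digest of the raw config bytes it is about to parse (the long hex string above), which makes it possible to verify later which exact config a machine booted with. A hedged sketch of that digest step; the config bytes below are a made-up placeholder, not the real QEMU fw_cfg payload from this boot:

```python
import hashlib

# Hypothetical stand-in for the raw Ignition config bytes; on this machine the
# real payload was fetched from the QEMU firmware config device (qemu_fw_cfg).
config_bytes = b'{"ignition": {"version": "3.4.0"}}'

# SHA512 over the raw bytes, logged as a lowercase hex string like the one above.
digest = hashlib.sha512(config_bytes).hexdigest()
print(f"parsing config with SHA512: {digest}")
```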
May 12 13:37:20.157479 ignition[805]: Ignition 2.21.0
May 12 13:37:20.157496 ignition[805]: Stage: kargs
May 12 13:37:20.157633 ignition[805]: no configs at "/usr/lib/ignition/base.d"
May 12 13:37:20.157642 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:20.159617 ignition[805]: kargs: kargs passed
May 12 13:37:20.162106 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 12 13:37:20.159690 ignition[805]: Ignition finished successfully
May 12 13:37:20.164437 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 12 13:37:20.190644 ignition[814]: Ignition 2.21.0
May 12 13:37:20.190663 ignition[814]: Stage: disks
May 12 13:37:20.190800 ignition[814]: no configs at "/usr/lib/ignition/base.d"
May 12 13:37:20.190809 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:20.194310 ignition[814]: disks: disks passed
May 12 13:37:20.194391 ignition[814]: Ignition finished successfully
May 12 13:37:20.196085 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 12 13:37:20.197279 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 12 13:37:20.199033 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 12 13:37:20.201235 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 13:37:20.203218 systemd[1]: Reached target sysinit.target - System Initialization.
May 12 13:37:20.205032 systemd[1]: Reached target basic.target - Basic System.
May 12 13:37:20.207801 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 12 13:37:20.231174 systemd-fsck[824]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 12 13:37:20.234816 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 12 13:37:20.237165 systemd[1]: Mounting sysroot.mount - /sysroot...
May 12 13:37:20.297058 kernel: EXT4-fs (vda9): mounted filesystem bc1f18c3-3425-4388-a617-b7347003d935 r/w with ordered data mode. Quota mode: none.
May 12 13:37:20.297587 systemd[1]: Mounted sysroot.mount - /sysroot.
May 12 13:37:20.298894 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 12 13:37:20.301270 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 13:37:20.302968 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 12 13:37:20.303997 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 12 13:37:20.304114 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 12 13:37:20.304143 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 13:37:20.322380 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 12 13:37:20.325539 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 12 13:37:20.328955 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (832)
May 12 13:37:20.328977 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:37:20.328987 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:37:20.330701 kernel: BTRFS info (device vda6): using free space tree
May 12 13:37:20.332356 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:37:20.333094 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 13:37:20.369062 initrd-setup-root[856]: cut: /sysroot/etc/passwd: No such file or directory
May 12 13:37:20.372457 initrd-setup-root[863]: cut: /sysroot/etc/group: No such file or directory
May 12 13:37:20.376663 initrd-setup-root[870]: cut: /sysroot/etc/shadow: No such file or directory
May 12 13:37:20.380672 initrd-setup-root[877]: cut: /sysroot/etc/gshadow: No such file or directory
May 12 13:37:20.455141 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 12 13:37:20.457199 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 12 13:37:20.458835 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 12 13:37:20.474066 kernel: BTRFS info (device vda6): last unmount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:37:20.484592 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 12 13:37:20.495504 ignition[946]: INFO : Ignition 2.21.0
May 12 13:37:20.495504 ignition[946]: INFO : Stage: mount
May 12 13:37:20.497607 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:37:20.497607 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:20.497607 ignition[946]: INFO : mount: mount passed
May 12 13:37:20.497607 ignition[946]: INFO : Ignition finished successfully
May 12 13:37:20.499620 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 12 13:37:20.501547 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 12 13:37:20.884355 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 12 13:37:20.885897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 12 13:37:20.903056 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (959)
May 12 13:37:20.905108 kernel: BTRFS info (device vda6): first mount of filesystem c2183054-24ef-4008-8a3e-033aff1dab63
May 12 13:37:20.905143 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 12 13:37:20.906051 kernel: BTRFS info (device vda6): using free space tree
May 12 13:37:20.908064 kernel: BTRFS info (device vda6): auto enabling async discard
May 12 13:37:20.909176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 12 13:37:20.944293 ignition[976]: INFO : Ignition 2.21.0
May 12 13:37:20.944293 ignition[976]: INFO : Stage: files
May 12 13:37:20.945939 ignition[976]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:37:20.945939 ignition[976]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:20.945939 ignition[976]: DEBUG : files: compiled without relabeling support, skipping
May 12 13:37:20.949457 ignition[976]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 12 13:37:20.949457 ignition[976]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 12 13:37:20.949457 ignition[976]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 12 13:37:20.953649 ignition[976]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 12 13:37:20.953649 ignition[976]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 12 13:37:20.953649 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 12 13:37:20.953649 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 12 13:37:20.949977 unknown[976]: wrote ssh authorized keys file for user: core
May 12 13:37:21.882179 systemd-networkd[791]: eth0: Gained IPv6LL
May 12 13:37:22.040036 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 12 13:37:22.271886 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 12 13:37:22.271886 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 12 13:37:22.275612 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 12 13:37:22.623254 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 12 13:37:22.683271 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 13:37:22.685157 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 12 13:37:22.957420 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 12 13:37:23.219855 ignition[976]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 12 13:37:23.219855 ignition[976]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 12 13:37:23.223615 ignition[976]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 12 13:37:23.238147 ignition[976]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 12 13:37:23.241075 ignition[976]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 12 13:37:23.242561 ignition[976]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 12 13:37:23.242561 ignition[976]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 12 13:37:23.242561 ignition[976]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 12 13:37:23.242561 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 12 13:37:23.242561 ignition[976]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 12 13:37:23.242561 ignition[976]: INFO : files: files passed
May 12 13:37:23.242561 ignition[976]: INFO : Ignition finished successfully
May 12 13:37:23.244513 systemd[1]: Finished ignition-files.service - Ignition (files).
May 12 13:37:23.247361 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 12 13:37:23.250191 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 12 13:37:23.257265 systemd[1]: ignition-quench.service: Deactivated successfully.
May 12 13:37:23.258884 initrd-setup-root-after-ignition[1005]: grep: /sysroot/oem/oem-release: No such file or directory
May 12 13:37:23.257365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 12 13:37:23.263024 initrd-setup-root-after-ignition[1007]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:37:23.263024 initrd-setup-root-after-ignition[1007]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:37:23.266897 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 12 13:37:23.263352 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:37:23.265714 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 12 13:37:23.268537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 12 13:37:23.317961 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 12 13:37:23.318071 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 12 13:37:23.320230 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 12 13:37:23.321233 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 12 13:37:23.323178 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 12 13:37:23.323860 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 12 13:37:23.340070 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:37:23.342312 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 12 13:37:23.359309 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 12 13:37:23.360511 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:37:23.362521 systemd[1]: Stopped target timers.target - Timer Units.
May 12 13:37:23.364270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 12 13:37:23.364389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 12 13:37:23.366824 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 12 13:37:23.367879 systemd[1]: Stopped target basic.target - Basic System.
May 12 13:37:23.369676 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 12 13:37:23.371438 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 12 13:37:23.373197 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 12 13:37:23.375060 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 12 13:37:23.377010 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 12 13:37:23.378842 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 12 13:37:23.380844 systemd[1]: Stopped target sysinit.target - System Initialization.
May 12 13:37:23.382813 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 12 13:37:23.384668 systemd[1]: Stopped target swap.target - Swaps.
May 12 13:37:23.386254 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 12 13:37:23.386373 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 12 13:37:23.388604 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 12 13:37:23.389723 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:37:23.391554 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 12 13:37:23.396069 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:37:23.397275 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 12 13:37:23.397391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 12 13:37:23.400086 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 12 13:37:23.400201 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 12 13:37:23.402178 systemd[1]: Stopped target paths.target - Path Units.
May 12 13:37:23.403775 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 12 13:37:23.409084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:37:23.410405 systemd[1]: Stopped target slices.target - Slice Units.
May 12 13:37:23.412444 systemd[1]: Stopped target sockets.target - Socket Units.
May 12 13:37:23.413967 systemd[1]: iscsid.socket: Deactivated successfully.
May 12 13:37:23.414063 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 12 13:37:23.415586 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 12 13:37:23.415662 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 12 13:37:23.417161 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 12 13:37:23.417271 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 12 13:37:23.419032 systemd[1]: ignition-files.service: Deactivated successfully.
May 12 13:37:23.419145 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 12 13:37:23.421334 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 12 13:37:23.422197 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 12 13:37:23.422348 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:37:23.431538 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 12 13:37:23.432402 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 12 13:37:23.432535 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:37:23.434338 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 12 13:37:23.434437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 12 13:37:23.441113 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 12 13:37:23.441240 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 12 13:37:23.446066 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 12 13:37:23.448071 ignition[1031]: INFO : Ignition 2.21.0
May 12 13:37:23.448071 ignition[1031]: INFO : Stage: umount
May 12 13:37:23.448071 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
May 12 13:37:23.448071 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 12 13:37:23.448071 ignition[1031]: INFO : umount: umount passed
May 12 13:37:23.448071 ignition[1031]: INFO : Ignition finished successfully
May 12 13:37:23.447873 systemd[1]: ignition-mount.service: Deactivated successfully.
May 12 13:37:23.447965 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 12 13:37:23.449192 systemd[1]: Stopped target network.target - Network.
May 12 13:37:23.450497 systemd[1]: ignition-disks.service: Deactivated successfully.
May 12 13:37:23.450553 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 12 13:37:23.452018 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 12 13:37:23.452114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 12 13:37:23.453887 systemd[1]: ignition-setup.service: Deactivated successfully. May 12 13:37:23.453931 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 12 13:37:23.456018 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 12 13:37:23.456072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 12 13:37:23.457860 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 12 13:37:23.459612 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 12 13:37:23.465837 systemd[1]: systemd-resolved.service: Deactivated successfully. May 12 13:37:23.465942 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 12 13:37:23.468873 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 12 13:37:23.469153 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 12 13:37:23.469193 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 12 13:37:23.472410 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 12 13:37:23.472638 systemd[1]: systemd-networkd.service: Deactivated successfully. May 12 13:37:23.472722 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 12 13:37:23.475440 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 12 13:37:23.475816 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 12 13:37:23.477805 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 12 13:37:23.477846 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 12 13:37:23.480481 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 12 13:37:23.481624 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
May 12 13:37:23.481686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 12 13:37:23.483751 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 12 13:37:23.483797 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 12 13:37:23.486950 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 12 13:37:23.486992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 12 13:37:23.488925 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 12 13:37:23.492202 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 12 13:37:23.503605 systemd[1]: network-cleanup.service: Deactivated successfully. May 12 13:37:23.503706 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 12 13:37:23.505822 systemd[1]: systemd-udevd.service: Deactivated successfully. May 12 13:37:23.505936 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 12 13:37:23.509378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 12 13:37:23.509444 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 12 13:37:23.510622 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 12 13:37:23.510655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 12 13:37:23.511724 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 12 13:37:23.511772 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 12 13:37:23.515416 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 12 13:37:23.515464 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 12 13:37:23.517559 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 12 13:37:23.517605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 12 13:37:23.520474 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 12 13:37:23.521838 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 12 13:37:23.521896 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 12 13:37:23.524739 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 12 13:37:23.524781 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 12 13:37:23.527680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 12 13:37:23.527720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:37:23.532262 systemd[1]: sysroot-boot.service: Deactivated successfully. May 12 13:37:23.532397 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 12 13:37:23.533909 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 12 13:37:23.533993 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 12 13:37:23.536346 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 12 13:37:23.536431 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 12 13:37:23.538524 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 12 13:37:23.540906 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 12 13:37:23.559337 systemd[1]: Switching root. May 12 13:37:23.588809 systemd-journald[239]: Journal stopped May 12 13:37:24.365228 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
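Editor's note: the initrd teardown above is a long run of uniform systemd state-change messages. A minimal sketch (not part of the log's tooling; the regex and helper name are the editor's own) of pulling the verb and unit name out of such a line:

```python
import re

# Matches journal short-form lines like:
#   "May 12 13:37:23.397391 systemd[1]: Stopped dracut-initqueue.service - ..."
# Groups: timestamp, source process, PID, state-change verb, unit name.
# "Stopped target" is listed before "Stopped" so the alternation prefers it.
LINE_RE = re.compile(
    r"^(?P<ts>\w+ \d+ [\d:.]+) (?P<src>[\w-]+)\[(?P<pid>\d+)\]: "
    r"(?P<verb>Stopped target|Stopped|Stopping|Closed|Finished) "
    r"(?P<unit>\S+)"
)

def parse_entry(line: str):
    """Return (verb, unit) for a systemd state-change line, or None."""
    m = LINE_RE.match(line)
    if not m:
        return None
    return m.group("verb"), m.group("unit")

sample = ("May 12 13:37:23.397391 systemd[1]: "
          "Stopped dracut-initqueue.service - dracut initqueue hook.")
print(parse_entry(sample))  # ('Stopped', 'dracut-initqueue.service')
```

Feeding the whole shutdown sequence through this yields the order in which units were torn down before the root switch.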
May 12 13:37:24.365282 kernel: SELinux: policy capability network_peer_controls=1
May 12 13:37:24.365295 kernel: SELinux: policy capability open_perms=1
May 12 13:37:24.365304 kernel: SELinux: policy capability extended_socket_class=1
May 12 13:37:24.365321 kernel: SELinux: policy capability always_check_network=0
May 12 13:37:24.365331 kernel: SELinux: policy capability cgroup_seclabel=1
May 12 13:37:24.365340 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 12 13:37:24.365350 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 12 13:37:24.365363 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 12 13:37:24.365374 kernel: audit: type=1403 audit(1747057043.755:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 12 13:37:24.365386 systemd[1]: Successfully loaded SELinux policy in 37.049ms.
May 12 13:37:24.365410 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.548ms.
May 12 13:37:24.365422 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 12 13:37:24.365433 systemd[1]: Detected virtualization kvm.
May 12 13:37:24.365443 systemd[1]: Detected architecture arm64.
May 12 13:37:24.365459 systemd[1]: Detected first boot.
May 12 13:37:24.365468 systemd[1]: Initializing machine ID from VM UUID.
May 12 13:37:24.365478 zram_generator::config[1078]: No configuration found.
May 12 13:37:24.365491 kernel: NET: Registered PF_VSOCK protocol family
May 12 13:37:24.365500 systemd[1]: Populated /etc with preset unit settings.
May 12 13:37:24.365511 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 12 13:37:24.365521 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 12 13:37:24.365531 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 12 13:37:24.365541 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 12 13:37:24.365551 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 12 13:37:24.365561 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 12 13:37:24.365571 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 12 13:37:24.365583 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 12 13:37:24.365596 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 12 13:37:24.365606 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 12 13:37:24.365616 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 12 13:37:24.365626 systemd[1]: Created slice user.slice - User and Session Slice.
May 12 13:37:24.365636 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 12 13:37:24.365647 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 12 13:37:24.365657 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 12 13:37:24.365667 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 12 13:37:24.365679 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 12 13:37:24.365689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 12 13:37:24.365699 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 12 13:37:24.365710 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 12 13:37:24.365720 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 12 13:37:24.365730 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 12 13:37:24.365740 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 12 13:37:24.365751 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 12 13:37:24.365761 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 12 13:37:24.365771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 12 13:37:24.365781 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 12 13:37:24.365791 systemd[1]: Reached target slices.target - Slice Units.
May 12 13:37:24.365801 systemd[1]: Reached target swap.target - Swaps.
May 12 13:37:24.365811 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 12 13:37:24.365821 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 12 13:37:24.365831 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 12 13:37:24.365843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 12 13:37:24.365853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 12 13:37:24.365863 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 12 13:37:24.365873 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 12 13:37:24.365883 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 12 13:37:24.365894 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 12 13:37:24.365904 systemd[1]: Mounting media.mount - External Media Directory...
May 12 13:37:24.365914 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 12 13:37:24.365924 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 12 13:37:24.365935 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 12 13:37:24.365946 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 12 13:37:24.365956 systemd[1]: Reached target machines.target - Containers.
May 12 13:37:24.365966 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 12 13:37:24.365976 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:37:24.365987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 12 13:37:24.365997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 12 13:37:24.366007 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:37:24.366017 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:37:24.366029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:37:24.366047 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 12 13:37:24.366059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:37:24.366069 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 12 13:37:24.366080 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 12 13:37:24.366089 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 12 13:37:24.366099 kernel: fuse: init (API version 7.39)
May 12 13:37:24.366108 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 12 13:37:24.366121 systemd[1]: Stopped systemd-fsck-usr.service.
May 12 13:37:24.366130 kernel: loop: module loaded
May 12 13:37:24.366141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:37:24.366151 kernel: ACPI: bus type drm_connector registered
May 12 13:37:24.366160 systemd[1]: Starting systemd-journald.service - Journal Service...
May 12 13:37:24.366170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 12 13:37:24.366180 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 12 13:37:24.366190 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 12 13:37:24.366200 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 12 13:37:24.366212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 12 13:37:24.366222 systemd[1]: verity-setup.service: Deactivated successfully.
May 12 13:37:24.366232 systemd[1]: Stopped verity-setup.service.
May 12 13:37:24.366242 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 12 13:37:24.366252 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 12 13:37:24.366263 systemd[1]: Mounted media.mount - External Media Directory.
May 12 13:37:24.366273 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 12 13:37:24.366283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 12 13:37:24.366293 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 12 13:37:24.366303 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 12 13:37:24.366341 systemd-journald[1150]: Collecting audit messages is disabled.
May 12 13:37:24.366362 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 12 13:37:24.366377 systemd-journald[1150]: Journal started
May 12 13:37:24.366398 systemd-journald[1150]: Runtime Journal (/run/log/journal/30e5961f022f4a2184fcb3fd49c736fd) is 5.9M, max 47.3M, 41.4M free.
May 12 13:37:24.141498 systemd[1]: Queued start job for default target multi-user.target.
May 12 13:37:24.158834 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 12 13:37:24.159229 systemd[1]: systemd-journald.service: Deactivated successfully.
May 12 13:37:24.368780 systemd[1]: Started systemd-journald.service - Journal Service.
May 12 13:37:24.369533 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 12 13:37:24.369694 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 12 13:37:24.371172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:37:24.372095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:37:24.373423 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:37:24.373584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:37:24.374928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:37:24.375124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:37:24.376583 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 12 13:37:24.376742 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 12 13:37:24.378036 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:37:24.378205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:37:24.379637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 12 13:37:24.380990 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 12 13:37:24.382563 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 12 13:37:24.384236 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 12 13:37:24.395989 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 12 13:37:24.398423 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 12 13:37:24.400376 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 12 13:37:24.401583 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 12 13:37:24.401609 systemd[1]: Reached target local-fs.target - Local File Systems.
May 12 13:37:24.403447 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 12 13:37:24.407825 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 12 13:37:24.408966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:37:24.409914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 12 13:37:24.411776 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 12 13:37:24.413011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:37:24.414250 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 12 13:37:24.415327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:37:24.418871 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:37:24.421057 systemd-journald[1150]: Time spent on flushing to /var/log/journal/30e5961f022f4a2184fcb3fd49c736fd is 13.699ms for 880 entries.
May 12 13:37:24.421057 systemd-journald[1150]: System Journal (/var/log/journal/30e5961f022f4a2184fcb3fd49c736fd) is 8M, max 195.6M, 187.6M free.
May 12 13:37:24.443023 systemd-journald[1150]: Received client request to flush runtime journal.
May 12 13:37:24.443078 kernel: loop0: detected capacity change from 0 to 107312
May 12 13:37:24.421034 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 12 13:37:24.424559 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 12 13:37:24.428558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 12 13:37:24.430134 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 12 13:37:24.431522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 12 13:37:24.435077 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 12 13:37:24.439201 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 12 13:37:24.445127 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 12 13:37:24.449073 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 12 13:37:24.458071 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 12 13:37:24.459161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:37:24.475876 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 12 13:37:24.481214 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 12 13:37:24.483204 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 12 13:37:24.483826 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 12 13:37:24.488060 kernel: loop1: detected capacity change from 0 to 138376
May 12 13:37:24.509760 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
May 12 13:37:24.509776 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
May 12 13:37:24.513862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 12 13:37:24.529052 kernel: loop2: detected capacity change from 0 to 201592
May 12 13:37:24.564073 kernel: loop3: detected capacity change from 0 to 107312
May 12 13:37:24.570055 kernel: loop4: detected capacity change from 0 to 138376
May 12 13:37:24.577054 kernel: loop5: detected capacity change from 0 to 201592
May 12 13:37:24.580833 (sd-merge)[1217]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 12 13:37:24.581572 (sd-merge)[1217]: Merged extensions into '/usr'.
May 12 13:37:24.584996 systemd[1]: Reload requested from client PID 1194 ('systemd-sysext') (unit systemd-sysext.service)...
May 12 13:37:24.585009 systemd[1]: Reloading...
May 12 13:37:24.641489 zram_generator::config[1243]: No configuration found.
May 12 13:37:24.676143 ldconfig[1189]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 12 13:37:24.729151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:37:24.791122 systemd[1]: Reloading finished in 205 ms.
May 12 13:37:24.815135 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 12 13:37:24.816651 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 12 13:37:24.836394 systemd[1]: Starting ensure-sysext.service...
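Editor's note: the loop0–loop5 kernel lines above correspond to sysext images being attached before the (sd-merge) step; the reported capacity is in 512-byte sectors. A small sketch (editor's illustration, using only values taken from the log) that converts those messages into byte sizes per device:

```python
import re

# "detected capacity change from 0 to N" reports N in 512-byte sectors.
CAP_RE = re.compile(r"(loop\d+): detected capacity change from 0 to (\d+)")

log = """\
loop0: detected capacity change from 0 to 107312
loop1: detected capacity change from 0 to 138376
loop2: detected capacity change from 0 to 201592
"""

# Map each loop device to its size in bytes (sectors * 512).
sizes = {dev: int(sectors) * 512 for dev, sectors in CAP_RE.findall(log)}
print(sizes["loop2"])  # 103215104 bytes, i.e. ~98 MiB
```

In the log each size appears twice (loop0/loop3, loop1/loop4, loop2/loop5), consistent with the three extensions 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' each being scanned and then merged.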
May 12 13:37:24.839996 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 12 13:37:24.854977 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 12 13:37:24.855007 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 12 13:37:24.855251 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 12 13:37:24.855455 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 12 13:37:24.856167 systemd-tmpfiles[1278]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 12 13:37:24.856394 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
May 12 13:37:24.856441 systemd-tmpfiles[1278]: ACLs are not supported, ignoring.
May 12 13:37:24.856758 systemd[1]: Reload requested from client PID 1277 ('systemctl') (unit ensure-sysext.service)...
May 12 13:37:24.856776 systemd[1]: Reloading...
May 12 13:37:24.858955 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:37:24.858962 systemd-tmpfiles[1278]: Skipping /boot
May 12 13:37:24.868974 systemd-tmpfiles[1278]: Detected autofs mount point /boot during canonicalization of boot.
May 12 13:37:24.868994 systemd-tmpfiles[1278]: Skipping /boot
May 12 13:37:24.902084 zram_generator::config[1305]: No configuration found.
May 12 13:37:24.968240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 12 13:37:25.029089 systemd[1]: Reloading finished in 172 ms.
May 12 13:37:25.042547 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 12 13:37:25.050064 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 12 13:37:25.059464 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:37:25.061680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 12 13:37:25.080799 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 12 13:37:25.083714 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 12 13:37:25.090163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 12 13:37:25.094405 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 12 13:37:25.104385 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 12 13:37:25.106982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:37:25.114132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:37:25.116226 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:37:25.118190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:37:25.119216 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:37:25.119332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:37:25.123091 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 12 13:37:25.124774 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 12 13:37:25.133516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:37:25.133667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:37:25.135360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:37:25.135500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:37:25.136935 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
May 12 13:37:25.137190 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:37:25.137362 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:37:25.145257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:37:25.147505 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:37:25.151439 augenrules[1378]: No rules
May 12 13:37:25.151322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:37:25.155335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:37:25.156449 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:37:25.156621 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:37:25.158816 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 12 13:37:25.160868 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 12 13:37:25.163866 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 12 13:37:25.165489 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 12 13:37:25.167420 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:37:25.167657 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:37:25.172030 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:37:25.172323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:37:25.174661 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:37:25.174810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:37:25.176571 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:37:25.176710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:37:25.182363 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 12 13:37:25.185097 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 12 13:37:25.198125 systemd[1]: Finished ensure-sysext.service.
May 12 13:37:25.204451 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 12 13:37:25.205773 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 12 13:37:25.207886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 12 13:37:25.209758 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 12 13:37:25.212089 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 12 13:37:25.218924 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 12 13:37:25.221415 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 12 13:37:25.221469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 12 13:37:25.224207 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 12 13:37:25.227002 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 12 13:37:25.231064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1408)
May 12 13:37:25.231210 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 12 13:37:25.231794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 12 13:37:25.231950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 12 13:37:25.242295 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 12 13:37:25.242469 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 12 13:37:25.244976 augenrules[1424]: /sbin/augenrules: No change
May 12 13:37:25.258000 augenrules[1451]: No rules
May 12 13:37:25.258165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 12 13:37:25.258356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 12 13:37:25.260455 systemd[1]: audit-rules.service: Deactivated successfully.
May 12 13:37:25.260660 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 12 13:37:25.262305 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 12 13:37:25.262473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 12 13:37:25.274710 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 12 13:37:25.288148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 12 13:37:25.290514 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 12 13:37:25.291718 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 12 13:37:25.291782 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 12 13:37:25.318092 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 12 13:37:25.347761 systemd-resolved[1344]: Positive Trust Anchors:
May 12 13:37:25.350478 systemd-resolved[1344]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 12 13:37:25.350594 systemd-resolved[1344]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 12 13:37:25.355767 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 12 13:37:25.357924 systemd[1]: Reached target time-set.target - System Time Set.
May 12 13:37:25.362996 systemd-resolved[1344]: Defaulting to hostname 'linux'.
May 12 13:37:25.367817 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 12 13:37:25.369843 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 12 13:37:25.371544 systemd[1]: Reached target sysinit.target - System Initialization. May 12 13:37:25.375256 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 12 13:37:25.376528 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 12 13:37:25.377877 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 12 13:37:25.379088 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 12 13:37:25.380445 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 12 13:37:25.381752 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 12 13:37:25.381787 systemd[1]: Reached target paths.target - Path Units. May 12 13:37:25.382732 systemd[1]: Reached target timers.target - Timer Units. May 12 13:37:25.383375 systemd-networkd[1429]: lo: Link UP May 12 13:37:25.383386 systemd-networkd[1429]: lo: Gained carrier May 12 13:37:25.384163 systemd-networkd[1429]: Enumeration completed May 12 13:37:25.384657 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 13:37:25.384668 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 12 13:37:25.385194 systemd-networkd[1429]: eth0: Link UP May 12 13:37:25.385200 systemd-networkd[1429]: eth0: Gained carrier May 12 13:37:25.385213 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 12 13:37:25.386883 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 12 13:37:25.389151 systemd[1]: Starting docker.socket - Docker Socket for the API... 
May 12 13:37:25.393626 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 12 13:37:25.394989 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 12 13:37:25.396942 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 12 13:37:25.402721 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 12 13:37:25.405475 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 12 13:37:25.407170 systemd[1]: Started systemd-networkd.service - Network Configuration. May 12 13:37:25.408418 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 12 13:37:25.415691 systemd[1]: Reached target network.target - Network. May 12 13:37:25.416684 systemd[1]: Reached target sockets.target - Socket Units. May 12 13:37:25.417094 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 12 13:37:25.417692 systemd[1]: Reached target basic.target - Basic System. May 12 13:37:25.418402 systemd-timesyncd[1435]: Network configuration changed, trying to establish connection. May 12 13:37:25.418733 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 12 13:37:25.418816 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 12 13:37:25.420007 systemd-timesyncd[1435]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 12 13:37:25.420066 systemd-timesyncd[1435]: Initial clock synchronization to Mon 2025-05-12 13:37:25.085144 UTC. May 12 13:37:25.423229 systemd[1]: Starting containerd.service - containerd container runtime... May 12 13:37:25.428529 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 12 13:37:25.430385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
May 12 13:37:25.432400 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 12 13:37:25.444616 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 12 13:37:25.445604 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 12 13:37:25.446673 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 12 13:37:25.451216 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 12 13:37:25.452616 jq[1488]: false May 12 13:37:25.453105 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 12 13:37:25.455182 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 12 13:37:25.467759 systemd[1]: Starting systemd-logind.service - User Login Management... May 12 13:37:25.471283 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 12 13:37:25.473474 extend-filesystems[1489]: Found loop3 May 12 13:37:25.474219 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 12 13:37:25.474449 extend-filesystems[1489]: Found loop4 May 12 13:37:25.476584 extend-filesystems[1489]: Found loop5 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda May 12 13:37:25.476584 extend-filesystems[1489]: Found vda1 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda2 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda3 May 12 13:37:25.476584 extend-filesystems[1489]: Found usr May 12 13:37:25.476584 extend-filesystems[1489]: Found vda4 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda6 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda7 May 12 13:37:25.476584 extend-filesystems[1489]: Found vda9 May 12 13:37:25.476584 extend-filesystems[1489]: Checking size of /dev/vda9 May 12 13:37:25.477881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 12 13:37:25.480174 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 12 13:37:25.480632 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 12 13:37:25.482710 systemd[1]: Starting update-engine.service - Update Engine... May 12 13:37:25.493466 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 12 13:37:25.497264 extend-filesystems[1489]: Resized partition /dev/vda9 May 12 13:37:25.500880 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 12 13:37:25.502612 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 12 13:37:25.502776 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 12 13:37:25.503182 systemd[1]: motdgen.service: Deactivated successfully. May 12 13:37:25.503354 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
May 12 13:37:25.504073 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1393) May 12 13:37:25.510639 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 12 13:37:25.510844 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 12 13:37:25.523856 extend-filesystems[1514]: resize2fs 1.47.2 (1-Jan-2025) May 12 13:37:25.527151 update_engine[1508]: I20250512 13:37:25.526422 1508 main.cc:92] Flatcar Update Engine starting May 12 13:37:25.529859 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 12 13:37:25.539184 jq[1512]: true May 12 13:37:25.549544 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 12 13:37:25.553054 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 12 13:37:25.553297 jq[1526]: true May 12 13:37:25.555652 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 12 13:37:25.573962 dbus-daemon[1486]: [system] SELinux support is enabled May 12 13:37:25.577406 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 12 13:37:25.577406 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 1 May 12 13:37:25.577406 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 12 13:37:25.577180 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 12 13:37:25.587374 extend-filesystems[1489]: Resized filesystem in /dev/vda9 May 12 13:37:25.593389 update_engine[1508]: I20250512 13:37:25.583411 1508 update_check_scheduler.cc:74] Next update check in 10m22s May 12 13:37:25.583270 systemd[1]: extend-filesystems.service: Deactivated successfully. May 12 13:37:25.584241 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
May 12 13:37:25.588614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 12 13:37:25.595087 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 12 13:37:25.595731 tar[1516]: linux-arm64/LICENSE May 12 13:37:25.595731 tar[1516]: linux-arm64/helm May 12 13:37:25.595118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 12 13:37:25.596976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 12 13:37:25.597005 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 12 13:37:25.598592 systemd[1]: Started update-engine.service - Update Engine. May 12 13:37:25.601017 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 12 13:37:25.603792 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (Power Button) May 12 13:37:25.604006 systemd-logind[1500]: New seat seat0. May 12 13:37:25.606402 systemd[1]: Started systemd-logind.service - User Login Management. May 12 13:37:25.628957 bash[1552]: Updated "/home/core/.ssh/authorized_keys" May 12 13:37:25.631971 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 12 13:37:25.633809 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 12 13:37:25.668365 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 12 13:37:25.786565 containerd[1525]: time="2025-05-12T13:37:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 12 13:37:25.788164 containerd[1525]: time="2025-05-12T13:37:25.787945400Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 12 13:37:25.797771 containerd[1525]: time="2025-05-12T13:37:25.797728480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.4µs" May 12 13:37:25.797771 containerd[1525]: time="2025-05-12T13:37:25.797765480Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 12 13:37:25.797842 containerd[1525]: time="2025-05-12T13:37:25.797782960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 12 13:37:25.797934 containerd[1525]: time="2025-05-12T13:37:25.797914160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 12 13:37:25.797976 containerd[1525]: time="2025-05-12T13:37:25.797935520Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 12 13:37:25.797976 containerd[1525]: time="2025-05-12T13:37:25.797958400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 13:37:25.798033 containerd[1525]: time="2025-05-12T13:37:25.798003840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 12 13:37:25.798033 containerd[1525]: time="2025-05-12T13:37:25.798020320Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 13:37:25.798243 containerd[1525]: time="2025-05-12T13:37:25.798219680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 12 13:37:25.798243 containerd[1525]: time="2025-05-12T13:37:25.798241280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 13:37:25.798282 containerd[1525]: time="2025-05-12T13:37:25.798251200Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 12 13:37:25.798282 containerd[1525]: time="2025-05-12T13:37:25.798259240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 12 13:37:25.798357 containerd[1525]: time="2025-05-12T13:37:25.798337400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 12 13:37:25.798546 containerd[1525]: time="2025-05-12T13:37:25.798526320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 13:37:25.798576 containerd[1525]: time="2025-05-12T13:37:25.798560480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 12 13:37:25.798576 containerd[1525]: time="2025-05-12T13:37:25.798570440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 12 13:37:25.799083 containerd[1525]: time="2025-05-12T13:37:25.799061680Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 12 13:37:25.799319 containerd[1525]: time="2025-05-12T13:37:25.799267320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 12 13:37:25.799356 containerd[1525]: time="2025-05-12T13:37:25.799346240Z" level=info msg="metadata content store policy set" policy=shared May 12 13:37:25.803159 containerd[1525]: time="2025-05-12T13:37:25.803132160Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 12 13:37:25.803216 containerd[1525]: time="2025-05-12T13:37:25.803173360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 12 13:37:25.803216 containerd[1525]: time="2025-05-12T13:37:25.803199000Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 12 13:37:25.803216 containerd[1525]: time="2025-05-12T13:37:25.803210000Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803220960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803233000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803243560Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803255280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803265360Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 12 13:37:25.803279 containerd[1525]: time="2025-05-12T13:37:25.803274520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803283480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803295280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803402280Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803421760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803435920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803446760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803456400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803465960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 12 13:37:25.803472 containerd[1525]: time="2025-05-12T13:37:25.803476080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 12 13:37:25.803631 containerd[1525]: time="2025-05-12T13:37:25.803488800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 12 
13:37:25.803631 containerd[1525]: time="2025-05-12T13:37:25.803500320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 12 13:37:25.803631 containerd[1525]: time="2025-05-12T13:37:25.803510240Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 12 13:37:25.803631 containerd[1525]: time="2025-05-12T13:37:25.803525480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 12 13:37:25.804718 containerd[1525]: time="2025-05-12T13:37:25.803922160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 12 13:37:25.804718 containerd[1525]: time="2025-05-12T13:37:25.803957480Z" level=info msg="Start snapshots syncer" May 12 13:37:25.804718 containerd[1525]: time="2025-05-12T13:37:25.803980800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 12 13:37:25.805501 containerd[1525]: time="2025-05-12T13:37:25.805448960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 12 13:37:25.805782 containerd[1525]: time="2025-05-12T13:37:25.805755320Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 12 13:37:25.806122 containerd[1525]: time="2025-05-12T13:37:25.806020000Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 12 13:37:25.806324 containerd[1525]: time="2025-05-12T13:37:25.806292200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 12 13:37:25.806679 containerd[1525]: time="2025-05-12T13:37:25.806465360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 12 13:37:25.806679 containerd[1525]: time="2025-05-12T13:37:25.806610080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 12 13:37:25.806679 containerd[1525]: time="2025-05-12T13:37:25.806622440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806634800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806792960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806807680Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806835280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806847800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 12 13:37:25.806959 containerd[1525]: time="2025-05-12T13:37:25.806858920Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 12 13:37:25.807144 containerd[1525]: time="2025-05-12T13:37:25.807124920Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807223240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807237520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807246800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807255720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807270120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807281360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807368560Z" level=info msg="runtime interface created" May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807376120Z" level=info msg="created NRI interface" May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807387720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807400080Z" level=info msg="Connect containerd service" May 12 13:37:25.807944 containerd[1525]: time="2025-05-12T13:37:25.807438120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 12 13:37:25.808843 
containerd[1525]: time="2025-05-12T13:37:25.808294240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 13:37:25.920532 containerd[1525]: time="2025-05-12T13:37:25.920317560Z" level=info msg="Start subscribing containerd event" May 12 13:37:25.920532 containerd[1525]: time="2025-05-12T13:37:25.920475720Z" level=info msg="Start recovering state" May 12 13:37:25.920658 containerd[1525]: time="2025-05-12T13:37:25.920584960Z" level=info msg="Start event monitor" May 12 13:37:25.920658 containerd[1525]: time="2025-05-12T13:37:25.920602640Z" level=info msg="Start cni network conf syncer for default" May 12 13:37:25.920658 containerd[1525]: time="2025-05-12T13:37:25.920611840Z" level=info msg="Start streaming server" May 12 13:37:25.920708 containerd[1525]: time="2025-05-12T13:37:25.920675320Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 12 13:37:25.920708 containerd[1525]: time="2025-05-12T13:37:25.920687440Z" level=info msg="runtime interface starting up..." May 12 13:37:25.920708 containerd[1525]: time="2025-05-12T13:37:25.920693520Z" level=info msg="starting plugins..." May 12 13:37:25.920762 containerd[1525]: time="2025-05-12T13:37:25.920709000Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 12 13:37:25.921060 containerd[1525]: time="2025-05-12T13:37:25.921029000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 12 13:37:25.921105 containerd[1525]: time="2025-05-12T13:37:25.921088560Z" level=info msg=serving... address=/run/containerd/containerd.sock May 12 13:37:25.922624 containerd[1525]: time="2025-05-12T13:37:25.921136880Z" level=info msg="containerd successfully booted in 0.134958s" May 12 13:37:25.921230 systemd[1]: Started containerd.service - containerd container runtime. 
May 12 13:37:25.961947 tar[1516]: linux-arm64/README.md May 12 13:37:25.980327 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 12 13:37:26.762218 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 12 13:37:26.779842 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 12 13:37:26.782442 systemd[1]: Starting issuegen.service - Generate /run/issue... May 12 13:37:26.804938 systemd[1]: issuegen.service: Deactivated successfully. May 12 13:37:26.805190 systemd[1]: Finished issuegen.service - Generate /run/issue. May 12 13:37:26.807557 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 12 13:37:26.828110 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 12 13:37:26.830688 systemd[1]: Started getty@tty1.service - Getty on tty1. May 12 13:37:26.832791 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 12 13:37:26.834168 systemd[1]: Reached target getty.target - Login Prompts. May 12 13:37:27.194163 systemd-networkd[1429]: eth0: Gained IPv6LL May 12 13:37:27.196407 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 12 13:37:27.198274 systemd[1]: Reached target network-online.target - Network is Online. May 12 13:37:27.200807 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 12 13:37:27.203177 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:27.213850 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 12 13:37:27.235319 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 12 13:37:27.236856 systemd[1]: coreos-metadata.service: Deactivated successfully. May 12 13:37:27.237074 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
May 12 13:37:27.239606 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 12 13:37:27.707491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:27.708959 systemd[1]: Reached target multi-user.target - Multi-User System. May 12 13:37:27.710743 (kubelet)[1622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 13:37:27.710802 systemd[1]: Startup finished in 2.153s (kernel) + 6.047s (initrd) + 3.996s (userspace) = 12.196s. May 12 13:37:28.079725 kubelet[1622]: E0512 13:37:28.079594 1622 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 13:37:28.081754 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 13:37:28.081894 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 13:37:28.082199 systemd[1]: kubelet.service: Consumed 774ms CPU time, 249.7M memory peak. May 12 13:37:30.883382 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 12 13:37:30.884432 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:35654.service - OpenSSH per-connection server daemon (10.0.0.1:35654). May 12 13:37:30.945332 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 35654 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:30.946783 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:30.952353 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 12 13:37:30.953209 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 12 13:37:30.958135 systemd-logind[1500]: New session 1 of user core. May 12 13:37:30.973620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 12 13:37:30.977300 systemd[1]: Starting user@500.service - User Manager for UID 500... May 12 13:37:30.990960 (systemd)[1639]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 12 13:37:30.993115 systemd-logind[1500]: New session c1 of user core. May 12 13:37:31.105450 systemd[1639]: Queued start job for default target default.target. May 12 13:37:31.116905 systemd[1639]: Created slice app.slice - User Application Slice. May 12 13:37:31.116933 systemd[1639]: Reached target paths.target - Paths. May 12 13:37:31.116969 systemd[1639]: Reached target timers.target - Timers. May 12 13:37:31.118123 systemd[1639]: Starting dbus.socket - D-Bus User Message Bus Socket... May 12 13:37:31.126552 systemd[1639]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 12 13:37:31.126609 systemd[1639]: Reached target sockets.target - Sockets. May 12 13:37:31.126646 systemd[1639]: Reached target basic.target - Basic System. May 12 13:37:31.126673 systemd[1639]: Reached target default.target - Main User Target. May 12 13:37:31.126698 systemd[1639]: Startup finished in 128ms. May 12 13:37:31.126836 systemd[1]: Started user@500.service - User Manager for UID 500. May 12 13:37:31.135177 systemd[1]: Started session-1.scope - Session 1 of User core. May 12 13:37:31.193137 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:35662.service - OpenSSH per-connection server daemon (10.0.0.1:35662). May 12 13:37:31.243768 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 35662 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.244850 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.249090 systemd-logind[1500]: New session 2 of user core. 
May 12 13:37:31.259232 systemd[1]: Started session-2.scope - Session 2 of User core. May 12 13:37:31.308522 sshd[1652]: Connection closed by 10.0.0.1 port 35662 May 12 13:37:31.308806 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 12 13:37:31.320856 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:35662.service: Deactivated successfully. May 12 13:37:31.322175 systemd[1]: session-2.scope: Deactivated successfully. May 12 13:37:31.323377 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. May 12 13:37:31.324374 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:35668.service - OpenSSH per-connection server daemon (10.0.0.1:35668). May 12 13:37:31.325266 systemd-logind[1500]: Removed session 2. May 12 13:37:31.377637 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 35668 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.378673 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.382958 systemd-logind[1500]: New session 3 of user core. May 12 13:37:31.396656 systemd[1]: Started session-3.scope - Session 3 of User core. May 12 13:37:31.443168 sshd[1660]: Connection closed by 10.0.0.1 port 35668 May 12 13:37:31.443442 sshd-session[1657]: pam_unix(sshd:session): session closed for user core May 12 13:37:31.459047 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:35668.service: Deactivated successfully. May 12 13:37:31.460392 systemd[1]: session-3.scope: Deactivated successfully. May 12 13:37:31.462211 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. May 12 13:37:31.462711 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:35684.service - OpenSSH per-connection server daemon (10.0.0.1:35684). May 12 13:37:31.463592 systemd-logind[1500]: Removed session 3. 
May 12 13:37:31.504654 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 35684 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.505708 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.510211 systemd-logind[1500]: New session 4 of user core. May 12 13:37:31.525184 systemd[1]: Started session-4.scope - Session 4 of User core. May 12 13:37:31.574998 sshd[1668]: Connection closed by 10.0.0.1 port 35684 May 12 13:37:31.575434 sshd-session[1665]: pam_unix(sshd:session): session closed for user core May 12 13:37:31.587951 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:35684.service: Deactivated successfully. May 12 13:37:31.589247 systemd[1]: session-4.scope: Deactivated successfully. May 12 13:37:31.591818 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. May 12 13:37:31.594220 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:35686.service - OpenSSH per-connection server daemon (10.0.0.1:35686). May 12 13:37:31.595246 systemd-logind[1500]: Removed session 4. May 12 13:37:31.647611 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 35686 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.648610 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.653094 systemd-logind[1500]: New session 5 of user core. May 12 13:37:31.665168 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 12 13:37:31.722713 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 12 13:37:31.722978 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:37:31.742844 sudo[1677]: pam_unix(sudo:session): session closed for user root May 12 13:37:31.744047 sshd[1676]: Connection closed by 10.0.0.1 port 35686 May 12 13:37:31.744367 sshd-session[1673]: pam_unix(sshd:session): session closed for user core May 12 13:37:31.754951 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:35686.service: Deactivated successfully. May 12 13:37:31.756251 systemd[1]: session-5.scope: Deactivated successfully. May 12 13:37:31.756940 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. May 12 13:37:31.758558 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:35696.service - OpenSSH per-connection server daemon (10.0.0.1:35696). May 12 13:37:31.759190 systemd-logind[1500]: Removed session 5. May 12 13:37:31.806614 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 35696 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.807701 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.811019 systemd-logind[1500]: New session 6 of user core. May 12 13:37:31.823222 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 12 13:37:31.871339 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 12 13:37:31.871594 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:37:31.874275 sudo[1687]: pam_unix(sudo:session): session closed for user root May 12 13:37:31.878214 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 12 13:37:31.878467 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:37:31.885540 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 12 13:37:31.917956 augenrules[1709]: No rules May 12 13:37:31.918912 systemd[1]: audit-rules.service: Deactivated successfully. May 12 13:37:31.919872 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 12 13:37:31.920608 sudo[1686]: pam_unix(sudo:session): session closed for user root May 12 13:37:31.922094 sshd[1685]: Connection closed by 10.0.0.1 port 35696 May 12 13:37:31.921970 sshd-session[1682]: pam_unix(sshd:session): session closed for user core May 12 13:37:31.931311 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:35696.service: Deactivated successfully. May 12 13:37:31.932615 systemd[1]: session-6.scope: Deactivated successfully. May 12 13:37:31.934297 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. May 12 13:37:31.935716 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:35702.service - OpenSSH per-connection server daemon (10.0.0.1:35702). May 12 13:37:31.936457 systemd-logind[1500]: Removed session 6. May 12 13:37:31.994653 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 35702 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:37:31.995680 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:37:31.999705 systemd-logind[1500]: New session 7 of user core. 
May 12 13:37:32.010244 systemd[1]: Started session-7.scope - Session 7 of User core. May 12 13:37:32.059214 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 12 13:37:32.059449 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 12 13:37:32.416899 systemd[1]: Starting docker.service - Docker Application Container Engine... May 12 13:37:32.432361 (dockerd)[1741]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 12 13:37:32.680378 dockerd[1741]: time="2025-05-12T13:37:32.680263709Z" level=info msg="Starting up" May 12 13:37:32.681968 dockerd[1741]: time="2025-05-12T13:37:32.681931936Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 12 13:37:32.785204 dockerd[1741]: time="2025-05-12T13:37:32.785161099Z" level=info msg="Loading containers: start." May 12 13:37:32.792062 kernel: Initializing XFRM netlink socket May 12 13:37:32.965857 systemd-networkd[1429]: docker0: Link UP May 12 13:37:32.968789 dockerd[1741]: time="2025-05-12T13:37:32.968757094Z" level=info msg="Loading containers: done." 
May 12 13:37:32.981593 dockerd[1741]: time="2025-05-12T13:37:32.981556294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 12 13:37:32.981710 dockerd[1741]: time="2025-05-12T13:37:32.981626706Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 12 13:37:32.981734 dockerd[1741]: time="2025-05-12T13:37:32.981717763Z" level=info msg="Initializing buildkit" May 12 13:37:33.000456 dockerd[1741]: time="2025-05-12T13:37:33.000431861Z" level=info msg="Completed buildkit initialization" May 12 13:37:33.007469 dockerd[1741]: time="2025-05-12T13:37:33.007436895Z" level=info msg="Daemon has completed initialization" May 12 13:37:33.007543 dockerd[1741]: time="2025-05-12T13:37:33.007483280Z" level=info msg="API listen on /run/docker.sock" May 12 13:37:33.007510 systemd[1]: Started docker.service - Docker Application Container Engine. May 12 13:37:33.756507 containerd[1525]: time="2025-05-12T13:37:33.756450726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 12 13:37:34.412892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1000556521.mount: Deactivated successfully. 
May 12 13:37:35.360285 containerd[1525]: time="2025-05-12T13:37:35.360226353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:35.360772 containerd[1525]: time="2025-05-12T13:37:35.360735544Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 12 13:37:35.361397 containerd[1525]: time="2025-05-12T13:37:35.361338777Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:35.365337 containerd[1525]: time="2025-05-12T13:37:35.365257955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:35.366253 containerd[1525]: time="2025-05-12T13:37:35.366227164Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.609736943s" May 12 13:37:35.366294 containerd[1525]: time="2025-05-12T13:37:35.366262198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 12 13:37:35.366876 containerd[1525]: time="2025-05-12T13:37:35.366809385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 12 13:37:36.497534 containerd[1525]: time="2025-05-12T13:37:36.497488526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:36.498998 containerd[1525]: time="2025-05-12T13:37:36.498884466Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 12 13:37:36.500082 containerd[1525]: time="2025-05-12T13:37:36.499914677Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:36.502359 containerd[1525]: time="2025-05-12T13:37:36.502316024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:36.502967 containerd[1525]: time="2025-05-12T13:37:36.502832435Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.13599468s" May 12 13:37:36.502967 containerd[1525]: time="2025-05-12T13:37:36.502861314Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 12 13:37:36.503632 containerd[1525]: time="2025-05-12T13:37:36.503451306Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 12 13:37:37.634782 containerd[1525]: time="2025-05-12T13:37:37.634707253Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:37.635601 containerd[1525]: time="2025-05-12T13:37:37.635173953Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 12 13:37:37.636470 containerd[1525]: time="2025-05-12T13:37:37.636435086Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:37.638860 containerd[1525]: time="2025-05-12T13:37:37.638834387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:37.639958 containerd[1525]: time="2025-05-12T13:37:37.639873638Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.136388178s" May 12 13:37:37.639958 containerd[1525]: time="2025-05-12T13:37:37.639906478Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 12 13:37:37.640543 containerd[1525]: time="2025-05-12T13:37:37.640517812Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 12 13:37:38.333581 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 12 13:37:38.336022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:38.448262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 12 13:37:38.451884 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 12 13:37:38.492185 kubelet[2022]: E0512 13:37:38.492137 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 12 13:37:38.495630 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 12 13:37:38.495845 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 12 13:37:38.497205 systemd[1]: kubelet.service: Consumed 137ms CPU time, 102.6M memory peak. May 12 13:37:38.714446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341261275.mount: Deactivated successfully. May 12 13:37:39.068280 containerd[1525]: time="2025-05-12T13:37:39.068161944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:39.069153 containerd[1525]: time="2025-05-12T13:37:39.069062374Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 12 13:37:39.069969 containerd[1525]: time="2025-05-12T13:37:39.069907494Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:39.071887 containerd[1525]: time="2025-05-12T13:37:39.071827745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:39.072515 containerd[1525]: time="2025-05-12T13:37:39.072461079Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.431909495s" May 12 13:37:39.072515 containerd[1525]: time="2025-05-12T13:37:39.072495186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 12 13:37:39.073232 containerd[1525]: time="2025-05-12T13:37:39.073094810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 12 13:37:39.647771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838462866.mount: Deactivated successfully. May 12 13:37:40.283982 containerd[1525]: time="2025-05-12T13:37:40.283927846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:40.284383 containerd[1525]: time="2025-05-12T13:37:40.284351892Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 12 13:37:40.285259 containerd[1525]: time="2025-05-12T13:37:40.285238850Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:40.287724 containerd[1525]: time="2025-05-12T13:37:40.287671767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:40.288791 containerd[1525]: time="2025-05-12T13:37:40.288737246Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.215609473s" May 12 13:37:40.288791 containerd[1525]: time="2025-05-12T13:37:40.288768364Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 12 13:37:40.289259 containerd[1525]: time="2025-05-12T13:37:40.289235291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 12 13:37:40.753412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1843230551.mount: Deactivated successfully. May 12 13:37:40.757450 containerd[1525]: time="2025-05-12T13:37:40.756945947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:37:40.758014 containerd[1525]: time="2025-05-12T13:37:40.757988256Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 12 13:37:40.758964 containerd[1525]: time="2025-05-12T13:37:40.758900928Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:37:40.761163 containerd[1525]: time="2025-05-12T13:37:40.761104415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 12 13:37:40.761918 containerd[1525]: time="2025-05-12T13:37:40.761778834Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 472.511312ms" May 12 13:37:40.761918 containerd[1525]: time="2025-05-12T13:37:40.761824100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 12 13:37:40.762576 containerd[1525]: time="2025-05-12T13:37:40.762387281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 12 13:37:41.322932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618647416.mount: Deactivated successfully. May 12 13:37:43.199181 containerd[1525]: time="2025-05-12T13:37:43.199113874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:43.333062 containerd[1525]: time="2025-05-12T13:37:43.332983572Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 12 13:37:43.334254 containerd[1525]: time="2025-05-12T13:37:43.334195749Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:43.337488 containerd[1525]: time="2025-05-12T13:37:43.337426480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 12 13:37:43.339166 containerd[1525]: time="2025-05-12T13:37:43.339132273Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.576712982s" May 12 13:37:43.339232 containerd[1525]: time="2025-05-12T13:37:43.339166723Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 12 13:37:47.784979 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:47.785140 systemd[1]: kubelet.service: Consumed 137ms CPU time, 102.6M memory peak. May 12 13:37:47.786976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:47.805350 systemd[1]: Reload requested from client PID 2181 ('systemctl') (unit session-7.scope)... May 12 13:37:47.805368 systemd[1]: Reloading... May 12 13:37:47.878058 zram_generator::config[2221]: No configuration found. May 12 13:37:47.976996 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 13:37:48.060253 systemd[1]: Reloading finished in 254 ms. May 12 13:37:48.117753 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:48.120676 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:48.120908 systemd[1]: kubelet.service: Deactivated successfully. May 12 13:37:48.121126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:48.121161 systemd[1]: kubelet.service: Consumed 84ms CPU time, 90.2M memory peak. May 12 13:37:48.122449 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:48.219181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 12 13:37:48.221745 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 13:37:48.256806 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:37:48.256806 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 12 13:37:48.256806 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:37:48.256806 kubelet[2271]: I0512 13:37:48.256723 2271 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 13:37:48.820494 kubelet[2271]: I0512 13:37:48.820451 2271 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 12 13:37:48.820494 kubelet[2271]: I0512 13:37:48.820485 2271 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 13:37:48.820773 kubelet[2271]: I0512 13:37:48.820742 2271 server.go:954] "Client rotation is on, will bootstrap in background" May 12 13:37:48.848773 kubelet[2271]: E0512 13:37:48.848723 2271 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" May 12 13:37:48.850074 kubelet[2271]: I0512 13:37:48.850052 
2271 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:37:48.857844 kubelet[2271]: I0512 13:37:48.857819 2271 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 12 13:37:48.860448 kubelet[2271]: I0512 13:37:48.860422 2271 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 12 13:37:48.861245 kubelet[2271]: I0512 13:37:48.861203 2271 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 13:37:48.861398 kubelet[2271]: I0512 13:37:48.861241 2271 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CP
UManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 12 13:37:48.861479 kubelet[2271]: I0512 13:37:48.861465 2271 topology_manager.go:138] "Creating topology manager with none policy" May 12 13:37:48.861479 kubelet[2271]: I0512 13:37:48.861474 2271 container_manager_linux.go:304] "Creating device plugin manager" May 12 13:37:48.861681 kubelet[2271]: I0512 13:37:48.861655 2271 state_mem.go:36] "Initialized new in-memory state store" May 12 13:37:48.864729 kubelet[2271]: I0512 13:37:48.864703 2271 kubelet.go:446] "Attempting to sync node with API server" May 12 13:37:48.864753 kubelet[2271]: I0512 13:37:48.864730 2271 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 13:37:48.864778 kubelet[2271]: I0512 13:37:48.864754 2271 kubelet.go:352] "Adding apiserver pod source" May 12 13:37:48.864778 kubelet[2271]: I0512 13:37:48.864770 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 13:37:48.869858 kubelet[2271]: W0512 13:37:48.869657 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused May 12 13:37:48.869858 kubelet[2271]: E0512 13:37:48.869721 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" May 12 13:37:48.869858 kubelet[2271]: W0512 13:37:48.869797 2271 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused May 12 13:37:48.869858 kubelet[2271]: E0512 13:37:48.869822 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" May 12 13:37:48.871860 kubelet[2271]: I0512 13:37:48.871738 2271 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 13:37:48.873302 kubelet[2271]: I0512 13:37:48.873246 2271 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 13:37:48.873379 kubelet[2271]: W0512 13:37:48.873365 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 12 13:37:48.874245 kubelet[2271]: I0512 13:37:48.874214 2271 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 12 13:37:48.874245 kubelet[2271]: I0512 13:37:48.874250 2271 server.go:1287] "Started kubelet" May 12 13:37:48.875289 kubelet[2271]: I0512 13:37:48.874362 2271 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 12 13:37:48.875289 kubelet[2271]: I0512 13:37:48.875199 2271 server.go:490] "Adding debug handlers to kubelet server" May 12 13:37:48.876161 kubelet[2271]: I0512 13:37:48.876005 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 13:37:48.876625 kubelet[2271]: I0512 13:37:48.876489 2271 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 13:37:48.876625 kubelet[2271]: I0512 13:37:48.876552 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 13:37:48.881071 kubelet[2271]: I0512 13:37:48.879300 2271 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 12 13:37:48.882084 kubelet[2271]: E0512 13:37:48.877355 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ecb25d8956652 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 13:37:48.87422933 +0000 UTC m=+0.649246370,LastTimestamp:2025-05-12 13:37:48.87422933 +0000 UTC m=+0.649246370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 13:37:48.882084 kubelet[2271]: I0512 13:37:48.881603 2271 volume_manager.go:297] "Starting Kubelet Volume Manager" May 12 13:37:48.882084 kubelet[2271]: I0512 13:37:48.881673 2271 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 13:37:48.882084 kubelet[2271]: I0512 13:37:48.881716 2271 reconciler.go:26] "Reconciler: start to sync state" May 12 13:37:48.882084 kubelet[2271]: E0512 13:37:48.881936 2271 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:37:48.882084 kubelet[2271]: W0512 13:37:48.881981 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused May 12 13:37:48.882084 kubelet[2271]: E0512 13:37:48.882017 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" May 12 13:37:48.882717 kubelet[2271]: E0512 13:37:48.882466 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" May 12 13:37:48.884059 kubelet[2271]: E0512 13:37:48.884010 2271 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 13:37:48.884059 kubelet[2271]: I0512 13:37:48.884042 2271 factory.go:221] Registration of the containerd container factory successfully May 12 13:37:48.884059 kubelet[2271]: I0512 13:37:48.884055 2271 factory.go:221] Registration of the systemd container factory successfully May 12 13:37:48.884164 kubelet[2271]: I0512 13:37:48.884121 2271 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 13:37:48.893747 kubelet[2271]: I0512 13:37:48.893691 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:37:48.894705 kubelet[2271]: I0512 13:37:48.894677 2271 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 12 13:37:48.894705 kubelet[2271]: I0512 13:37:48.894704 2271 status_manager.go:227] "Starting to sync pod status with apiserver" May 12 13:37:48.894759 kubelet[2271]: I0512 13:37:48.894723 2271 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 12 13:37:48.894759 kubelet[2271]: I0512 13:37:48.894729 2271 kubelet.go:2388] "Starting kubelet main sync loop" May 12 13:37:48.894798 kubelet[2271]: E0512 13:37:48.894764 2271 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:37:48.897908 kubelet[2271]: W0512 13:37:48.897861 2271 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused May 12 13:37:48.897981 kubelet[2271]: E0512 13:37:48.897910 2271 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" May 12 13:37:48.898011 kubelet[2271]: E0512 13:37:48.897954 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183ecb25d8956652 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-12 13:37:48.87422933 +0000 UTC m=+0.649246370,LastTimestamp:2025-05-12 13:37:48.87422933 +0000 UTC m=+0.649246370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 12 13:37:48.898682 kubelet[2271]: I0512 13:37:48.898645 2271 
cpu_manager.go:221] "Starting CPU manager" policy="none" May 12 13:37:48.898682 kubelet[2271]: I0512 13:37:48.898658 2271 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 12 13:37:48.898682 kubelet[2271]: I0512 13:37:48.898673 2271 state_mem.go:36] "Initialized new in-memory state store" May 12 13:37:48.932644 kubelet[2271]: I0512 13:37:48.932617 2271 policy_none.go:49] "None policy: Start" May 12 13:37:48.932644 kubelet[2271]: I0512 13:37:48.932644 2271 memory_manager.go:186] "Starting memorymanager" policy="None" May 12 13:37:48.932783 kubelet[2271]: I0512 13:37:48.932657 2271 state_mem.go:35] "Initializing new in-memory state store" May 12 13:37:48.938196 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 12 13:37:48.955033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 12 13:37:48.957757 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 12 13:37:48.968921 kubelet[2271]: I0512 13:37:48.968892 2271 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:37:48.969121 kubelet[2271]: I0512 13:37:48.969105 2271 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 13:37:48.969157 kubelet[2271]: I0512 13:37:48.969119 2271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:37:48.969559 kubelet[2271]: I0512 13:37:48.969347 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:37:48.970410 kubelet[2271]: E0512 13:37:48.970316 2271 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 12 13:37:48.970410 kubelet[2271]: E0512 13:37:48.970358 2271 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 12 13:37:49.002209 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 12 13:37:49.022589 kubelet[2271]: E0512 13:37:49.022510 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.025301 systemd[1]: Created slice kubepods-burstable-pod1b5afee2550024b6eaedcf64d898190f.slice - libcontainer container kubepods-burstable-pod1b5afee2550024b6eaedcf64d898190f.slice. May 12 13:37:49.039948 kubelet[2271]: E0512 13:37:49.039920 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.041973 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 12 13:37:49.043234 kubelet[2271]: E0512 13:37:49.043217 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.071416 kubelet[2271]: I0512 13:37:49.071325 2271 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 13:37:49.076709 kubelet[2271]: E0512 13:37:49.076661 2271 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" May 12 13:37:49.083046 kubelet[2271]: I0512 13:37:49.083010 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:49.083115 kubelet[2271]: I0512 13:37:49.083059 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:49.083115 kubelet[2271]: I0512 13:37:49.083084 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:49.083115 kubelet[2271]: E0512 13:37:49.083083 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" May 12 13:37:49.083115 kubelet[2271]: I0512 13:37:49.083101 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:49.083214 kubelet[2271]: I0512 13:37:49.083118 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 12 13:37:49.083214 kubelet[2271]: I0512 13:37:49.083135 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:49.083214 kubelet[2271]: I0512 13:37:49.083150 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:49.083214 kubelet[2271]: I0512 13:37:49.083173 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:49.083214 kubelet[2271]: I0512 13:37:49.083190 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:49.277846 kubelet[2271]: I0512 13:37:49.277812 2271 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 13:37:49.278135 kubelet[2271]: E0512 13:37:49.278061 2271 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" May 12 13:37:49.323616 kubelet[2271]: E0512 13:37:49.323538 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.324202 containerd[1525]: time="2025-05-12T13:37:49.324155229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 12 13:37:49.340831 kubelet[2271]: E0512 13:37:49.340803 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.341467 containerd[1525]: time="2025-05-12T13:37:49.341348842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b5afee2550024b6eaedcf64d898190f,Namespace:kube-system,Attempt:0,}" May 12 13:37:49.343513 kubelet[2271]: 
E0512 13:37:49.343491 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.344094 containerd[1525]: time="2025-05-12T13:37:49.343677316Z" level=info msg="connecting to shim fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a" address="unix:///run/containerd/s/763ab5d8a3a6f11f41bdc05890dd5825107ba51b20bcc38be231c69f1ff3c9d1" namespace=k8s.io protocol=ttrpc version=3 May 12 13:37:49.344291 containerd[1525]: time="2025-05-12T13:37:49.344258146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 12 13:37:49.369180 containerd[1525]: time="2025-05-12T13:37:49.369090868Z" level=info msg="connecting to shim 751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d" address="unix:///run/containerd/s/b604aade09af2ae4e2858df6de2cdb724947cba59a826fd72cdea68b18c95d6a" namespace=k8s.io protocol=ttrpc version=3 May 12 13:37:49.370698 containerd[1525]: time="2025-05-12T13:37:49.370662971Z" level=info msg="connecting to shim 1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f" address="unix:///run/containerd/s/e859e7df1bfbd2982b100dfbd1c87473f2b4a539bf6dd25da071793e4e67cd95" namespace=k8s.io protocol=ttrpc version=3 May 12 13:37:49.397190 systemd[1]: Started cri-containerd-751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d.scope - libcontainer container 751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d. May 12 13:37:49.398190 systemd[1]: Started cri-containerd-fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a.scope - libcontainer container fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a. 
May 12 13:37:49.401310 systemd[1]: Started cri-containerd-1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f.scope - libcontainer container 1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f. May 12 13:37:49.434849 containerd[1525]: time="2025-05-12T13:37:49.434761355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b5afee2550024b6eaedcf64d898190f,Namespace:kube-system,Attempt:0,} returns sandbox id \"751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d\"" May 12 13:37:49.436615 kubelet[2271]: E0512 13:37:49.436582 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.439251 containerd[1525]: time="2025-05-12T13:37:49.439211862Z" level=info msg="CreateContainer within sandbox \"751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 12 13:37:49.439909 containerd[1525]: time="2025-05-12T13:37:49.439886351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a\"" May 12 13:37:49.441402 kubelet[2271]: E0512 13:37:49.441379 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.442966 containerd[1525]: time="2025-05-12T13:37:49.442931710Z" level=info msg="CreateContainer within sandbox \"fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 12 13:37:49.443862 containerd[1525]: time="2025-05-12T13:37:49.443834994Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f\"" May 12 13:37:49.444400 kubelet[2271]: E0512 13:37:49.444371 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.445661 containerd[1525]: time="2025-05-12T13:37:49.445609824Z" level=info msg="CreateContainer within sandbox \"1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 12 13:37:49.449600 containerd[1525]: time="2025-05-12T13:37:49.449526489Z" level=info msg="Container 4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9: CDI devices from CRI Config.CDIDevices: []" May 12 13:37:49.451685 containerd[1525]: time="2025-05-12T13:37:49.451658664Z" level=info msg="Container 59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d: CDI devices from CRI Config.CDIDevices: []" May 12 13:37:49.452825 containerd[1525]: time="2025-05-12T13:37:49.452800963Z" level=info msg="Container 5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3: CDI devices from CRI Config.CDIDevices: []" May 12 13:37:49.458146 containerd[1525]: time="2025-05-12T13:37:49.458115990Z" level=info msg="CreateContainer within sandbox \"751ddc5c518d6fac0fdb86a72ea45645eb64d9b0fc945cb20fe885d2281a326d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9\"" May 12 13:37:49.459088 containerd[1525]: time="2025-05-12T13:37:49.458602005Z" level=info msg="StartContainer for \"4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9\"" May 12 13:37:49.459812 containerd[1525]: time="2025-05-12T13:37:49.459598827Z" level=info 
msg="connecting to shim 4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9" address="unix:///run/containerd/s/b604aade09af2ae4e2858df6de2cdb724947cba59a826fd72cdea68b18c95d6a" protocol=ttrpc version=3 May 12 13:37:49.460859 containerd[1525]: time="2025-05-12T13:37:49.460830273Z" level=info msg="CreateContainer within sandbox \"fc9bdbfd079af3dcd8898605723b7eed6ed05711c7abd3c6090af181f6154d8a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d\"" May 12 13:37:49.461237 containerd[1525]: time="2025-05-12T13:37:49.461214246Z" level=info msg="StartContainer for \"59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d\"" May 12 13:37:49.461626 containerd[1525]: time="2025-05-12T13:37:49.461585125Z" level=info msg="CreateContainer within sandbox \"1b882ebf2136d6fb0de8ba2c90b8a4897c1cb1ec71b79f3b13fb1e3cefd27e1f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3\"" May 12 13:37:49.462011 containerd[1525]: time="2025-05-12T13:37:49.461976245Z" level=info msg="StartContainer for \"5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3\"" May 12 13:37:49.462411 containerd[1525]: time="2025-05-12T13:37:49.462379341Z" level=info msg="connecting to shim 59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d" address="unix:///run/containerd/s/763ab5d8a3a6f11f41bdc05890dd5825107ba51b20bcc38be231c69f1ff3c9d1" protocol=ttrpc version=3 May 12 13:37:49.462923 containerd[1525]: time="2025-05-12T13:37:49.462887114Z" level=info msg="connecting to shim 5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3" address="unix:///run/containerd/s/e859e7df1bfbd2982b100dfbd1c87473f2b4a539bf6dd25da071793e4e67cd95" protocol=ttrpc version=3 May 12 13:37:49.477189 systemd[1]: Started 
cri-containerd-4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9.scope - libcontainer container 4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9. May 12 13:37:49.483685 kubelet[2271]: E0512 13:37:49.483640 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" May 12 13:37:49.490185 systemd[1]: Started cri-containerd-59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d.scope - libcontainer container 59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d. May 12 13:37:49.491797 systemd[1]: Started cri-containerd-5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3.scope - libcontainer container 5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3. May 12 13:37:49.526801 containerd[1525]: time="2025-05-12T13:37:49.525892462Z" level=info msg="StartContainer for \"4977c4892c61d6869115430fa21c8a683bfed2c217e449a2310c7590804cf7a9\" returns successfully" May 12 13:37:49.537175 containerd[1525]: time="2025-05-12T13:37:49.534877274Z" level=info msg="StartContainer for \"5f05bb500549aebcf7f3227d94538aaf045b1296d061405e573ebed31f35e3b3\" returns successfully" May 12 13:37:49.542969 containerd[1525]: time="2025-05-12T13:37:49.540539945Z" level=info msg="StartContainer for \"59e4056bea60db1b8b1ad29bbf8abc03e27793bf5a25c8e0d47b9f65c93f8d6d\" returns successfully" May 12 13:37:49.679578 kubelet[2271]: I0512 13:37:49.679475 2271 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 13:37:49.680483 kubelet[2271]: E0512 13:37:49.679813 2271 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" May 12 13:37:49.905073 kubelet[2271]: E0512 
13:37:49.905024 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.905199 kubelet[2271]: E0512 13:37:49.905163 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.909079 kubelet[2271]: E0512 13:37:49.909032 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.909167 kubelet[2271]: E0512 13:37:49.909149 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:49.911283 kubelet[2271]: E0512 13:37:49.911254 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:49.911440 kubelet[2271]: E0512 13:37:49.911420 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:50.481633 kubelet[2271]: I0512 13:37:50.481405 2271 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 13:37:50.915330 kubelet[2271]: E0512 13:37:50.914458 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:50.915330 kubelet[2271]: E0512 13:37:50.914569 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:50.915330 kubelet[2271]: E0512 13:37:50.914583 2271 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:50.915330 kubelet[2271]: E0512 13:37:50.915158 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:50.917324 kubelet[2271]: E0512 13:37:50.917287 2271 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 12 13:37:50.917401 kubelet[2271]: E0512 13:37:50.917384 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:51.463222 kubelet[2271]: E0512 13:37:51.463179 2271 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 12 13:37:51.502340 kubelet[2271]: I0512 13:37:51.502289 2271 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 12 13:37:51.583179 kubelet[2271]: I0512 13:37:51.582769 2271 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 12 13:37:51.596530 kubelet[2271]: E0512 13:37:51.596476 2271 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 12 13:37:51.596530 kubelet[2271]: I0512 13:37:51.596509 2271 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 13:37:51.599169 kubelet[2271]: E0512 13:37:51.599131 2271 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-localhost" May 12 13:37:51.599169 kubelet[2271]: I0512 13:37:51.599155 2271 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 13:37:51.601156 kubelet[2271]: E0512 13:37:51.601120 2271 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 12 13:37:51.866147 kubelet[2271]: I0512 13:37:51.866032 2271 apiserver.go:52] "Watching apiserver" May 12 13:37:51.881930 kubelet[2271]: I0512 13:37:51.881888 2271 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:37:51.913573 kubelet[2271]: I0512 13:37:51.913547 2271 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 13:37:51.915280 kubelet[2271]: E0512 13:37:51.915248 2271 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 12 13:37:51.915412 kubelet[2271]: E0512 13:37:51.915388 2271 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:53.193905 systemd[1]: Reload requested from client PID 2541 ('systemctl') (unit session-7.scope)... May 12 13:37:53.193919 systemd[1]: Reloading... May 12 13:37:53.259072 zram_generator::config[2585]: No configuration found. May 12 13:37:53.328104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 12 13:37:53.428492 systemd[1]: Reloading finished in 234 ms. 
May 12 13:37:53.447715 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:53.466108 systemd[1]: kubelet.service: Deactivated successfully. May 12 13:37:53.467141 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:53.467260 systemd[1]: kubelet.service: Consumed 1.031s CPU time, 126M memory peak. May 12 13:37:53.469059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 12 13:37:53.595500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 12 13:37:53.614384 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 12 13:37:53.647579 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 12 13:37:53.647579 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 12 13:37:53.647579 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 12 13:37:53.647914 kubelet[2626]: I0512 13:37:53.647853 2626 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 12 13:37:53.654060 kubelet[2626]: I0512 13:37:53.653895 2626 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 12 13:37:53.654060 kubelet[2626]: I0512 13:37:53.653924 2626 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 12 13:37:53.654217 kubelet[2626]: I0512 13:37:53.654186 2626 server.go:954] "Client rotation is on, will bootstrap in background" May 12 13:37:53.655371 kubelet[2626]: I0512 13:37:53.655350 2626 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 12 13:37:53.657739 kubelet[2626]: I0512 13:37:53.657706 2626 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 12 13:37:53.661185 kubelet[2626]: I0512 13:37:53.661150 2626 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 12 13:37:53.664024 kubelet[2626]: I0512 13:37:53.663812 2626 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 12 13:37:53.664110 kubelet[2626]: I0512 13:37:53.664059 2626 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 12 13:37:53.665302 kubelet[2626]: I0512 13:37:53.664083 2626 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 12 13:37:53.665545 kubelet[2626]: I0512 13:37:53.665453 2626 topology_manager.go:138] "Creating topology manager with none policy" 
May 12 13:37:53.665545 kubelet[2626]: I0512 13:37:53.665470 2626 container_manager_linux.go:304] "Creating device plugin manager" May 12 13:37:53.665635 kubelet[2626]: I0512 13:37:53.665516 2626 state_mem.go:36] "Initialized new in-memory state store" May 12 13:37:53.665828 kubelet[2626]: I0512 13:37:53.665817 2626 kubelet.go:446] "Attempting to sync node with API server" May 12 13:37:53.665943 kubelet[2626]: I0512 13:37:53.665932 2626 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 12 13:37:53.666049 kubelet[2626]: I0512 13:37:53.666025 2626 kubelet.go:352] "Adding apiserver pod source" May 12 13:37:53.666805 kubelet[2626]: I0512 13:37:53.666787 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 12 13:37:53.668076 kubelet[2626]: I0512 13:37:53.667528 2626 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 12 13:37:53.668076 kubelet[2626]: I0512 13:37:53.667968 2626 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 12 13:37:53.668499 kubelet[2626]: I0512 13:37:53.668480 2626 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 12 13:37:53.668584 kubelet[2626]: I0512 13:37:53.668574 2626 server.go:1287] "Started kubelet" May 12 13:37:53.668835 kubelet[2626]: I0512 13:37:53.668793 2626 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 12 13:37:53.669028 kubelet[2626]: I0512 13:37:53.668987 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 12 13:37:53.669313 kubelet[2626]: I0512 13:37:53.669292 2626 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 12 13:37:53.669619 kubelet[2626]: I0512 13:37:53.669584 2626 server.go:490] "Adding debug handlers to kubelet server" May 12 13:37:53.670990 kubelet[2626]: I0512 13:37:53.670970 2626 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 12 13:37:53.671206 kubelet[2626]: I0512 13:37:53.671184 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 12 13:37:53.672151 kubelet[2626]: E0512 13:37:53.672123 2626 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 12 13:37:53.672219 kubelet[2626]: I0512 13:37:53.672161 2626 volume_manager.go:297] "Starting Kubelet Volume Manager" May 12 13:37:53.672485 kubelet[2626]: I0512 13:37:53.672300 2626 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 12 13:37:53.672485 kubelet[2626]: I0512 13:37:53.672415 2626 reconciler.go:26] "Reconciler: start to sync state" May 12 13:37:53.673214 kubelet[2626]: I0512 13:37:53.673181 2626 factory.go:221] Registration of the systemd container factory successfully May 12 13:37:53.673339 kubelet[2626]: I0512 13:37:53.673318 2626 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 12 13:37:53.673746 kubelet[2626]: E0512 13:37:53.673724 2626 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 12 13:37:53.677489 kubelet[2626]: I0512 13:37:53.677469 2626 factory.go:221] Registration of the containerd container factory successfully May 12 13:37:53.696755 kubelet[2626]: I0512 13:37:53.696717 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 12 13:37:53.698897 kubelet[2626]: I0512 13:37:53.698805 2626 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 12 13:37:53.698897 kubelet[2626]: I0512 13:37:53.698861 2626 status_manager.go:227] "Starting to sync pod status with apiserver" May 12 13:37:53.698897 kubelet[2626]: I0512 13:37:53.698881 2626 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 12 13:37:53.698897 kubelet[2626]: I0512 13:37:53.698888 2626 kubelet.go:2388] "Starting kubelet main sync loop" May 12 13:37:53.699035 kubelet[2626]: E0512 13:37:53.698927 2626 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 12 13:37:53.718640 kubelet[2626]: I0512 13:37:53.718612 2626 cpu_manager.go:221] "Starting CPU manager" policy="none" May 12 13:37:53.718640 kubelet[2626]: I0512 13:37:53.718629 2626 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 12 13:37:53.718640 kubelet[2626]: I0512 13:37:53.718649 2626 state_mem.go:36] "Initialized new in-memory state store" May 12 13:37:53.718814 kubelet[2626]: I0512 13:37:53.718783 2626 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 12 13:37:53.718814 kubelet[2626]: I0512 13:37:53.718800 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 12 13:37:53.718863 kubelet[2626]: I0512 13:37:53.718819 2626 policy_none.go:49] "None policy: Start" May 12 13:37:53.718863 kubelet[2626]: I0512 13:37:53.718827 2626 memory_manager.go:186] "Starting memorymanager" policy="None" May 12 13:37:53.718863 kubelet[2626]: I0512 13:37:53.718836 2626 state_mem.go:35] "Initializing new in-memory state store" May 12 13:37:53.718938 kubelet[2626]: I0512 13:37:53.718924 2626 state_mem.go:75] "Updated machine memory state" May 12 13:37:53.722635 kubelet[2626]: I0512 13:37:53.722609 2626 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 12 13:37:53.722798 kubelet[2626]: I0512 
13:37:53.722774 2626 eviction_manager.go:189] "Eviction manager: starting control loop" May 12 13:37:53.722835 kubelet[2626]: I0512 13:37:53.722794 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 12 13:37:53.723062 kubelet[2626]: I0512 13:37:53.723005 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 12 13:37:53.723988 kubelet[2626]: E0512 13:37:53.723933 2626 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 12 13:37:53.799985 kubelet[2626]: I0512 13:37:53.799911 2626 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.800140 kubelet[2626]: I0512 13:37:53.800031 2626 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 13:37:53.800503 kubelet[2626]: I0512 13:37:53.800478 2626 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 12 13:37:53.826854 kubelet[2626]: I0512 13:37:53.826819 2626 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 12 13:37:53.832330 kubelet[2626]: I0512 13:37:53.832303 2626 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 12 13:37:53.832413 kubelet[2626]: I0512 13:37:53.832372 2626 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 12 13:37:53.974457 kubelet[2626]: I0512 13:37:53.973829 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.974457 kubelet[2626]: 
I0512 13:37:53.973876 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 12 13:37:53.974457 kubelet[2626]: I0512 13:37:53.973898 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:53.974457 kubelet[2626]: I0512 13:37:53.973939 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.974457 kubelet[2626]: I0512 13:37:53.973991 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.974753 kubelet[2626]: I0512 13:37:53.974020 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.974753 kubelet[2626]: I0512 13:37:53.974050 2626 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 12 13:37:53.974753 kubelet[2626]: I0512 13:37:53.974067 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:53.974753 kubelet[2626]: I0512 13:37:53.974092 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5afee2550024b6eaedcf64d898190f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b5afee2550024b6eaedcf64d898190f\") " pod="kube-system/kube-apiserver-localhost" May 12 13:37:54.105406 kubelet[2626]: E0512 13:37:54.105322 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.105546 kubelet[2626]: E0512 13:37:54.105458 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.105570 kubelet[2626]: E0512 13:37:54.105549 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.193638 sudo[2662]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 12 13:37:54.193909 
sudo[2662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 12 13:37:54.620939 sudo[2662]: pam_unix(sudo:session): session closed for user root May 12 13:37:54.669126 kubelet[2626]: I0512 13:37:54.667826 2626 apiserver.go:52] "Watching apiserver" May 12 13:37:54.672548 kubelet[2626]: I0512 13:37:54.672495 2626 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 12 13:37:54.713604 kubelet[2626]: I0512 13:37:54.713010 2626 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 12 13:37:54.713604 kubelet[2626]: E0512 13:37:54.713272 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.713604 kubelet[2626]: I0512 13:37:54.713325 2626 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 12 13:37:54.719111 kubelet[2626]: E0512 13:37:54.719083 2626 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 12 13:37:54.719224 kubelet[2626]: E0512 13:37:54.719212 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.719338 kubelet[2626]: E0512 13:37:54.719096 2626 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 12 13:37:54.721190 kubelet[2626]: E0512 13:37:54.721168 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:54.736303 kubelet[2626]: I0512 13:37:54.736140 2626 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.736127078 podStartE2EDuration="1.736127078s" podCreationTimestamp="2025-05-12 13:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:37:54.735829335 +0000 UTC m=+1.118503962" watchObservedRunningTime="2025-05-12 13:37:54.736127078 +0000 UTC m=+1.118801705" May 12 13:37:54.754413 kubelet[2626]: I0512 13:37:54.754360 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.754345928 podStartE2EDuration="1.754345928s" podCreationTimestamp="2025-05-12 13:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:37:54.748264187 +0000 UTC m=+1.130938814" watchObservedRunningTime="2025-05-12 13:37:54.754345928 +0000 UTC m=+1.137020555" May 12 13:37:55.714860 kubelet[2626]: E0512 13:37:55.714760 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:55.715214 kubelet[2626]: E0512 13:37:55.715012 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:55.715717 kubelet[2626]: E0512 13:37:55.715677 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:56.470095 sudo[1721]: pam_unix(sudo:session): session closed for user root May 12 13:37:56.471515 sshd[1720]: Connection closed by 10.0.0.1 port 35702 May 12 13:37:56.472083 sshd-session[1717]: 
pam_unix(sshd:session): session closed for user core May 12 13:37:56.475859 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:35702.service: Deactivated successfully. May 12 13:37:56.477672 systemd[1]: session-7.scope: Deactivated successfully. May 12 13:37:56.477883 systemd[1]: session-7.scope: Consumed 6.948s CPU time, 265.9M memory peak. May 12 13:37:56.478766 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. May 12 13:37:56.479813 systemd-logind[1500]: Removed session 7. May 12 13:37:58.818862 kubelet[2626]: E0512 13:37:58.818773 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:59.150740 kubelet[2626]: E0512 13:37:59.150646 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:37:59.165672 kubelet[2626]: I0512 13:37:59.165615 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.165600071 podStartE2EDuration="6.165600071s" podCreationTimestamp="2025-05-12 13:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:37:54.75461502 +0000 UTC m=+1.137289647" watchObservedRunningTime="2025-05-12 13:37:59.165600071 +0000 UTC m=+5.548274658" May 12 13:37:59.449524 kubelet[2626]: I0512 13:37:59.449432 2626 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 12 13:37:59.449880 containerd[1525]: time="2025-05-12T13:37:59.449842902Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 12 13:37:59.450415 kubelet[2626]: I0512 13:37:59.450013 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 12 13:37:59.719663 kubelet[2626]: E0512 13:37:59.719232 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:00.456921 systemd[1]: Created slice kubepods-besteffort-podbc4c00a8_f045_4166_a4a6_86b19c3ce1f2.slice - libcontainer container kubepods-besteffort-podbc4c00a8_f045_4166_a4a6_86b19c3ce1f2.slice. May 12 13:38:00.471090 systemd[1]: Created slice kubepods-burstable-pod5626306d_44f4_4250_8599_fda8cfda8403.slice - libcontainer container kubepods-burstable-pod5626306d_44f4_4250_8599_fda8cfda8403.slice. May 12 13:38:00.519616 kubelet[2626]: I0512 13:38:00.519551 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bc4c00a8-f045-4166-a4a6-86b19c3ce1f2-kube-proxy\") pod \"kube-proxy-6hmng\" (UID: \"bc4c00a8-f045-4166-a4a6-86b19c3ce1f2\") " pod="kube-system/kube-proxy-6hmng" May 12 13:38:00.519951 kubelet[2626]: I0512 13:38:00.519629 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc4c00a8-f045-4166-a4a6-86b19c3ce1f2-xtables-lock\") pod \"kube-proxy-6hmng\" (UID: \"bc4c00a8-f045-4166-a4a6-86b19c3ce1f2\") " pod="kube-system/kube-proxy-6hmng" May 12 13:38:00.519951 kubelet[2626]: I0512 13:38:00.519649 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-run\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.519951 kubelet[2626]: I0512 13:38:00.519663 2626 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-bpf-maps\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.519951 kubelet[2626]: I0512 13:38:00.519698 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-net\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.519951 kubelet[2626]: I0512 13:38:00.519718 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vkqv\" (UniqueName: \"kubernetes.io/projected/bc4c00a8-f045-4166-a4a6-86b19c3ce1f2-kube-api-access-4vkqv\") pod \"kube-proxy-6hmng\" (UID: \"bc4c00a8-f045-4166-a4a6-86b19c3ce1f2\") " pod="kube-system/kube-proxy-6hmng" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519734 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl6f\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-kube-api-access-2wl6f\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519775 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-hubble-tls\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519809 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-cgroup\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519844 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cni-path\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519859 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-xtables-lock\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520104 kubelet[2626]: I0512 13:38:00.519873 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5626306d-44f4-4250-8599-fda8cfda8403-clustermesh-secrets\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.519889 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-hostproc\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924" May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.519948 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-etc-cni-netd\") pod \"cilium-4s924\" (UID: 
\"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924"
May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.519963 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5626306d-44f4-4250-8599-fda8cfda8403-cilium-config-path\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924"
May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.519979 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-kernel\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924"
May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.519996 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc4c00a8-f045-4166-a4a6-86b19c3ce1f2-lib-modules\") pod \"kube-proxy-6hmng\" (UID: \"bc4c00a8-f045-4166-a4a6-86b19c3ce1f2\") " pod="kube-system/kube-proxy-6hmng"
May 12 13:38:00.520221 kubelet[2626]: I0512 13:38:00.520018 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-lib-modules\") pod \"cilium-4s924\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") " pod="kube-system/cilium-4s924"
May 12 13:38:00.548450 systemd[1]: Created slice kubepods-besteffort-pod38ba2b40_3686_440e_b70b_6c2b9d4a4bc6.slice - libcontainer container kubepods-besteffort-pod38ba2b40_3686_440e_b70b_6c2b9d4a4bc6.slice.
May 12 13:38:00.620596 kubelet[2626]: I0512 13:38:00.620548 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-l645x\" (UID: \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\") " pod="kube-system/cilium-operator-6c4d7847fc-l645x"
May 12 13:38:00.620596 kubelet[2626]: I0512 13:38:00.620600 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9sfw\" (UniqueName: \"kubernetes.io/projected/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-kube-api-access-b9sfw\") pod \"cilium-operator-6c4d7847fc-l645x\" (UID: \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\") " pod="kube-system/cilium-operator-6c4d7847fc-l645x"
May 12 13:38:00.721969 kubelet[2626]: E0512 13:38:00.721583 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.772137 kubelet[2626]: E0512 13:38:00.772106 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.772589 containerd[1525]: time="2025-05-12T13:38:00.772553050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hmng,Uid:bc4c00a8-f045-4166-a4a6-86b19c3ce1f2,Namespace:kube-system,Attempt:0,}"
May 12 13:38:00.774465 kubelet[2626]: E0512 13:38:00.774428 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.774790 containerd[1525]: time="2025-05-12T13:38:00.774749048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s924,Uid:5626306d-44f4-4250-8599-fda8cfda8403,Namespace:kube-system,Attempt:0,}"
May 12 13:38:00.793115 containerd[1525]: time="2025-05-12T13:38:00.792285904Z" level=info msg="connecting to shim 62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c" address="unix:///run/containerd/s/79a8b7f00fa4953b70396cf9f2faf10307d6d3c13e095433da6d31d24780cc22" namespace=k8s.io protocol=ttrpc version=3
May 12 13:38:00.794007 containerd[1525]: time="2025-05-12T13:38:00.793741448Z" level=info msg="connecting to shim 98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" namespace=k8s.io protocol=ttrpc version=3
May 12 13:38:00.819200 systemd[1]: Started cri-containerd-62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c.scope - libcontainer container 62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c.
May 12 13:38:00.820280 systemd[1]: Started cri-containerd-98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6.scope - libcontainer container 98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6.
May 12 13:38:00.842150 containerd[1525]: time="2025-05-12T13:38:00.842082849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hmng,Uid:bc4c00a8-f045-4166-a4a6-86b19c3ce1f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c\""
May 12 13:38:00.843451 kubelet[2626]: E0512 13:38:00.843428 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.846283 containerd[1525]: time="2025-05-12T13:38:00.846239349Z" level=info msg="CreateContainer within sandbox \"62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 12 13:38:00.850715 containerd[1525]: time="2025-05-12T13:38:00.850667234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4s924,Uid:5626306d-44f4-4250-8599-fda8cfda8403,Namespace:kube-system,Attempt:0,} returns sandbox id \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\""
May 12 13:38:00.851438 kubelet[2626]: E0512 13:38:00.851415 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.853242 containerd[1525]: time="2025-05-12T13:38:00.853207153Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 12 13:38:00.853875 kubelet[2626]: E0512 13:38:00.853612 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.853994 containerd[1525]: time="2025-05-12T13:38:00.853963931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l645x,Uid:38ba2b40-3686-440e-b70b-6c2b9d4a4bc6,Namespace:kube-system,Attempt:0,}"
May 12 13:38:00.858539 containerd[1525]: time="2025-05-12T13:38:00.858506003Z" level=info msg="Container d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:00.869258 containerd[1525]: time="2025-05-12T13:38:00.869209367Z" level=info msg="CreateContainer within sandbox \"62a07d06ff98fa6b6a734cbe2cd82bd86743ea2f21965d6e94345c62d12ad89c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d\""
May 12 13:38:00.870486 containerd[1525]: time="2025-05-12T13:38:00.870429215Z" level=info msg="StartContainer for \"d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d\""
May 12 13:38:00.872267 containerd[1525]: time="2025-05-12T13:38:00.872219517Z" level=info msg="connecting to shim d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d" address="unix:///run/containerd/s/79a8b7f00fa4953b70396cf9f2faf10307d6d3c13e095433da6d31d24780cc22" protocol=ttrpc version=3
May 12 13:38:00.882205 containerd[1525]: time="2025-05-12T13:38:00.881761647Z" level=info msg="connecting to shim 8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c" address="unix:///run/containerd/s/6251e194f88b2a659f1d86500b4f84e379776ac5ed7ea50c8288cbea5b0aea08" namespace=k8s.io protocol=ttrpc version=3
May 12 13:38:00.895187 systemd[1]: Started cri-containerd-d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d.scope - libcontainer container d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d.
May 12 13:38:00.898256 systemd[1]: Started cri-containerd-8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c.scope - libcontainer container 8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c.
May 12 13:38:00.935013 containerd[1525]: time="2025-05-12T13:38:00.934937229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-l645x,Uid:38ba2b40-3686-440e-b70b-6c2b9d4a4bc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\""
May 12 13:38:00.937115 kubelet[2626]: E0512 13:38:00.936648 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:00.939271 containerd[1525]: time="2025-05-12T13:38:00.937588574Z" level=info msg="StartContainer for \"d8b7dc92c6c1b1332f3fc49e040a15195c7007d861ad0dfa363fd6db264e554d\" returns successfully"
May 12 13:38:01.182793 kubelet[2626]: E0512 13:38:01.182747 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:01.727802 kubelet[2626]: E0512 13:38:01.727728 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:01.728907 kubelet[2626]: E0512 13:38:01.728843 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:01.736742 kubelet[2626]: I0512 13:38:01.736681 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6hmng" podStartSLOduration=1.736667256 podStartE2EDuration="1.736667256s" podCreationTimestamp="2025-05-12 13:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:38:01.736262165 +0000 UTC m=+8.118936792" watchObservedRunningTime="2025-05-12 13:38:01.736667256 +0000 UTC m=+8.119341843"
May 12 13:38:07.487164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount790262099.mount: Deactivated successfully.
May 12 13:38:08.698967 containerd[1525]: time="2025-05-12T13:38:08.698668431Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:38:08.699981 containerd[1525]: time="2025-05-12T13:38:08.699946066Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 12 13:38:08.700873 containerd[1525]: time="2025-05-12T13:38:08.700830921Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:38:08.702217 containerd[1525]: time="2025-05-12T13:38:08.702105275Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.848861475s"
May 12 13:38:08.702217 containerd[1525]: time="2025-05-12T13:38:08.702139761Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 12 13:38:08.706127 containerd[1525]: time="2025-05-12T13:38:08.706096564Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 12 13:38:08.708217 containerd[1525]: time="2025-05-12T13:38:08.708174761Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 12 13:38:08.713970 containerd[1525]: time="2025-05-12T13:38:08.713589908Z" level=info msg="Container d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:08.730239 containerd[1525]: time="2025-05-12T13:38:08.730199042Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\""
May 12 13:38:08.730516 containerd[1525]: time="2025-05-12T13:38:08.730492527Z" level=info msg="StartContainer for \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\""
May 12 13:38:08.731240 containerd[1525]: time="2025-05-12T13:38:08.731203876Z" level=info msg="connecting to shim d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" protocol=ttrpc version=3
May 12 13:38:08.766183 systemd[1]: Started cri-containerd-d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6.scope - libcontainer container d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6.
May 12 13:38:08.790648 containerd[1525]: time="2025-05-12T13:38:08.790560893Z" level=info msg="StartContainer for \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" returns successfully"
May 12 13:38:08.829751 kubelet[2626]: E0512 13:38:08.827658 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:08.863927 systemd[1]: cri-containerd-d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6.scope: Deactivated successfully.
May 12 13:38:08.864305 systemd[1]: cri-containerd-d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6.scope: Consumed 83ms CPU time, 8.8M memory peak, 3.1M written to disk.
May 12 13:38:08.970030 containerd[1525]: time="2025-05-12T13:38:08.969863654Z" level=info msg="received exit event container_id:\"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" id:\"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" pid:3053 exited_at:{seconds:1747057088 nanos:960778347}"
May 12 13:38:08.977194 containerd[1525]: time="2025-05-12T13:38:08.977089836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" id:\"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" pid:3053 exited_at:{seconds:1747057088 nanos:960778347}"
May 12 13:38:09.010813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6-rootfs.mount: Deactivated successfully.
May 12 13:38:09.748090 kubelet[2626]: E0512 13:38:09.747927 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:09.750215 containerd[1525]: time="2025-05-12T13:38:09.750170395Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 12 13:38:09.760730 containerd[1525]: time="2025-05-12T13:38:09.760682679Z" level=info msg="Container df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:09.803260 containerd[1525]: time="2025-05-12T13:38:09.803202960Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\""
May 12 13:38:09.803677 containerd[1525]: time="2025-05-12T13:38:09.803609298Z" level=info msg="StartContainer for \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\""
May 12 13:38:09.804742 containerd[1525]: time="2025-05-12T13:38:09.804677213Z" level=info msg="connecting to shim df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" protocol=ttrpc version=3
May 12 13:38:09.825216 systemd[1]: Started cri-containerd-df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0.scope - libcontainer container df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0.
May 12 13:38:09.849219 containerd[1525]: time="2025-05-12T13:38:09.849142736Z" level=info msg="StartContainer for \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" returns successfully"
May 12 13:38:09.870607 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 12 13:38:09.870814 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 12 13:38:09.871126 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 12 13:38:09.872370 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 12 13:38:09.874162 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 12 13:38:09.874669 systemd[1]: cri-containerd-df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0.scope: Deactivated successfully.
May 12 13:38:09.881646 containerd[1525]: time="2025-05-12T13:38:09.881499384Z" level=info msg="received exit event container_id:\"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" id:\"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" pid:3099 exited_at:{seconds:1747057089 nanos:881186739}"
May 12 13:38:09.881894 containerd[1525]: time="2025-05-12T13:38:09.881870158Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" id:\"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" pid:3099 exited_at:{seconds:1747057089 nanos:881186739}"
May 12 13:38:09.902728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 12 13:38:10.514752 update_engine[1508]: I20250512 13:38:10.514684 1508 update_attempter.cc:509] Updating boot flags...
May 12 13:38:10.549069 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3152)
May 12 13:38:10.599011 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3151)
May 12 13:38:10.647165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3151)
May 12 13:38:10.753213 kubelet[2626]: E0512 13:38:10.753153 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:10.759163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0-rootfs.mount: Deactivated successfully.
May 12 13:38:10.762198 containerd[1525]: time="2025-05-12T13:38:10.762164380Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 12 13:38:10.826026 containerd[1525]: time="2025-05-12T13:38:10.825986326Z" level=info msg="Container b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:10.827199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365362013.mount: Deactivated successfully.
May 12 13:38:10.834352 containerd[1525]: time="2025-05-12T13:38:10.834314073Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\""
May 12 13:38:10.835182 containerd[1525]: time="2025-05-12T13:38:10.834896473Z" level=info msg="StartContainer for \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\""
May 12 13:38:10.837011 containerd[1525]: time="2025-05-12T13:38:10.836974039Z" level=info msg="connecting to shim b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" protocol=ttrpc version=3
May 12 13:38:10.851310 containerd[1525]: time="2025-05-12T13:38:10.851257846Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:38:10.852430 containerd[1525]: time="2025-05-12T13:38:10.852348876Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 12 13:38:10.853347 containerd[1525]: time="2025-05-12T13:38:10.853309808Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 12 13:38:10.854195 systemd[1]: Started cri-containerd-b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595.scope - libcontainer container b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595.
May 12 13:38:10.855889 containerd[1525]: time="2025-05-12T13:38:10.855841837Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.149712588s"
May 12 13:38:10.856264 containerd[1525]: time="2025-05-12T13:38:10.855899005Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 12 13:38:10.859015 containerd[1525]: time="2025-05-12T13:38:10.858929182Z" level=info msg="CreateContainer within sandbox \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 12 13:38:10.866070 containerd[1525]: time="2025-05-12T13:38:10.866016037Z" level=info msg="Container 886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:10.871552 containerd[1525]: time="2025-05-12T13:38:10.871505593Z" level=info msg="CreateContainer within sandbox \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\""
May 12 13:38:10.872385 containerd[1525]: time="2025-05-12T13:38:10.872354590Z" level=info msg="StartContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\""
May 12 13:38:10.873230 containerd[1525]: time="2025-05-12T13:38:10.873199786Z" level=info msg="connecting to shim 886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4" address="unix:///run/containerd/s/6251e194f88b2a659f1d86500b4f84e379776ac5ed7ea50c8288cbea5b0aea08" protocol=ttrpc version=3
May 12 13:38:10.893448 systemd[1]: Started cri-containerd-886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4.scope - libcontainer container 886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4.
May 12 13:38:10.904001 containerd[1525]: time="2025-05-12T13:38:10.903954341Z" level=info msg="StartContainer for \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" returns successfully"
May 12 13:38:10.910999 systemd[1]: cri-containerd-b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595.scope: Deactivated successfully.
May 12 13:38:10.911380 containerd[1525]: time="2025-05-12T13:38:10.911147971Z" level=info msg="received exit event container_id:\"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" id:\"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" pid:3177 exited_at:{seconds:1747057090 nanos:910944263}"
May 12 13:38:10.911380 containerd[1525]: time="2025-05-12T13:38:10.911253305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" id:\"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" pid:3177 exited_at:{seconds:1747057090 nanos:910944263}"
May 12 13:38:10.931096 containerd[1525]: time="2025-05-12T13:38:10.931000424Z" level=info msg="StartContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" returns successfully"
May 12 13:38:11.758557 kubelet[2626]: E0512 13:38:11.758503 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:11.761900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595-rootfs.mount: Deactivated successfully.
May 12 13:38:11.763533 kubelet[2626]: E0512 13:38:11.763499 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:11.764337 containerd[1525]: time="2025-05-12T13:38:11.764264298Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 12 13:38:11.774479 containerd[1525]: time="2025-05-12T13:38:11.774386943Z" level=info msg="Container c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:11.783460 containerd[1525]: time="2025-05-12T13:38:11.783375120Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\""
May 12 13:38:11.783924 containerd[1525]: time="2025-05-12T13:38:11.783895228Z" level=info msg="StartContainer for \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\""
May 12 13:38:11.784798 containerd[1525]: time="2025-05-12T13:38:11.784764661Z" level=info msg="connecting to shim c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" protocol=ttrpc version=3
May 12 13:38:11.812236 systemd[1]: Started cri-containerd-c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f.scope - libcontainer container c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f.
May 12 13:38:11.836301 systemd[1]: cri-containerd-c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f.scope: Deactivated successfully.
May 12 13:38:11.837684 containerd[1525]: time="2025-05-12T13:38:11.837645784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" id:\"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" pid:3249 exited_at:{seconds:1747057091 nanos:837436476}"
May 12 13:38:11.837806 containerd[1525]: time="2025-05-12T13:38:11.837785202Z" level=info msg="received exit event container_id:\"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" id:\"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" pid:3249 exited_at:{seconds:1747057091 nanos:837436476}"
May 12 13:38:11.844215 containerd[1525]: time="2025-05-12T13:38:11.844182960Z" level=info msg="StartContainer for \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" returns successfully"
May 12 13:38:11.853938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f-rootfs.mount: Deactivated successfully.
May 12 13:38:12.768778 kubelet[2626]: E0512 13:38:12.768745 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:12.772964 kubelet[2626]: E0512 13:38:12.769014 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:12.773067 containerd[1525]: time="2025-05-12T13:38:12.772387820Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 12 13:38:12.789636 kubelet[2626]: I0512 13:38:12.789575 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-l645x" podStartSLOduration=2.871786391 podStartE2EDuration="12.789560239s" podCreationTimestamp="2025-05-12 13:38:00 +0000 UTC" firstStartedPulling="2025-05-12 13:38:00.938693435 +0000 UTC m=+7.321368062" lastFinishedPulling="2025-05-12 13:38:10.856467283 +0000 UTC m=+17.239141910" observedRunningTime="2025-05-12 13:38:11.797879458 +0000 UTC m=+18.180554085" watchObservedRunningTime="2025-05-12 13:38:12.789560239 +0000 UTC m=+19.172234866"
May 12 13:38:12.790147 containerd[1525]: time="2025-05-12T13:38:12.790099346Z" level=info msg="Container 6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f: CDI devices from CRI Config.CDIDevices: []"
May 12 13:38:12.793087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835657163.mount: Deactivated successfully.
May 12 13:38:12.796346 containerd[1525]: time="2025-05-12T13:38:12.796310079Z" level=info msg="CreateContainer within sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\""
May 12 13:38:12.796938 containerd[1525]: time="2025-05-12T13:38:12.796912875Z" level=info msg="StartContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\""
May 12 13:38:12.797901 containerd[1525]: time="2025-05-12T13:38:12.797864313Z" level=info msg="connecting to shim 6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f" address="unix:///run/containerd/s/a22b0c111231e01f135555c7e61d8dd3960ba28eebc726a5da235e60e96b6486" protocol=ttrpc version=3
May 12 13:38:12.818259 systemd[1]: Started cri-containerd-6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f.scope - libcontainer container 6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f.
May 12 13:38:12.846593 containerd[1525]: time="2025-05-12T13:38:12.846556178Z" level=info msg="StartContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" returns successfully"
May 12 13:38:12.937776 containerd[1525]: time="2025-05-12T13:38:12.937716293Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" id:\"bdebd1ba60f93ea952694b8a90ce523d170f25e8c862ab04547d5a97d640d34b\" pid:3317 exited_at:{seconds:1747057092 nanos:937422697}"
May 12 13:38:12.990896 kubelet[2626]: I0512 13:38:12.990874 2626 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
May 12 13:38:13.026816 systemd[1]: Created slice kubepods-burstable-poddeb99d32_7b9c_4931_b9c0_247253e13b2d.slice - libcontainer container kubepods-burstable-poddeb99d32_7b9c_4931_b9c0_247253e13b2d.slice.
May 12 13:38:13.033335 systemd[1]: Created slice kubepods-burstable-pod9500f94e_fdc1_4e0a_8829_0a6c844c2395.slice - libcontainer container kubepods-burstable-pod9500f94e_fdc1_4e0a_8829_0a6c844c2395.slice.
May 12 13:38:13.106964 kubelet[2626]: I0512 13:38:13.106885 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzch2\" (UniqueName: \"kubernetes.io/projected/deb99d32-7b9c-4931-b9c0-247253e13b2d-kube-api-access-gzch2\") pod \"coredns-668d6bf9bc-kvpq4\" (UID: \"deb99d32-7b9c-4931-b9c0-247253e13b2d\") " pod="kube-system/coredns-668d6bf9bc-kvpq4"
May 12 13:38:13.106964 kubelet[2626]: I0512 13:38:13.106922 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deb99d32-7b9c-4931-b9c0-247253e13b2d-config-volume\") pod \"coredns-668d6bf9bc-kvpq4\" (UID: \"deb99d32-7b9c-4931-b9c0-247253e13b2d\") " pod="kube-system/coredns-668d6bf9bc-kvpq4"
May 12 13:38:13.106964 kubelet[2626]: I0512 13:38:13.106940 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgsk\" (UniqueName: \"kubernetes.io/projected/9500f94e-fdc1-4e0a-8829-0a6c844c2395-kube-api-access-xqgsk\") pod \"coredns-668d6bf9bc-p48p8\" (UID: \"9500f94e-fdc1-4e0a-8829-0a6c844c2395\") " pod="kube-system/coredns-668d6bf9bc-p48p8"
May 12 13:38:13.106964 kubelet[2626]: I0512 13:38:13.106960 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9500f94e-fdc1-4e0a-8829-0a6c844c2395-config-volume\") pod \"coredns-668d6bf9bc-p48p8\" (UID: \"9500f94e-fdc1-4e0a-8829-0a6c844c2395\") " pod="kube-system/coredns-668d6bf9bc-p48p8"
May 12 13:38:13.330983 kubelet[2626]: E0512 13:38:13.330767 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:13.331700 containerd[1525]: time="2025-05-12T13:38:13.331662391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kvpq4,Uid:deb99d32-7b9c-4931-b9c0-247253e13b2d,Namespace:kube-system,Attempt:0,}"
May 12 13:38:13.345137 kubelet[2626]: E0512 13:38:13.345109 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:13.349164 containerd[1525]: time="2025-05-12T13:38:13.348882513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p48p8,Uid:9500f94e-fdc1-4e0a-8829-0a6c844c2395,Namespace:kube-system,Attempt:0,}"
May 12 13:38:13.776882 kubelet[2626]: E0512 13:38:13.776763 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:13.793478 kubelet[2626]: I0512 13:38:13.793415 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4s924" podStartSLOduration=5.940013297 podStartE2EDuration="13.793397238s" podCreationTimestamp="2025-05-12 13:38:00 +0000 UTC" firstStartedPulling="2025-05-12 13:38:00.852553479 +0000 UTC m=+7.235228106" lastFinishedPulling="2025-05-12 13:38:08.70593742 +0000 UTC m=+15.088612047" observedRunningTime="2025-05-12 13:38:13.792521054 +0000 UTC m=+20.175195681" watchObservedRunningTime="2025-05-12 13:38:13.793397238 +0000 UTC m=+20.176071905"
May 12 13:38:14.778973 kubelet[2626]: E0512 13:38:14.778942 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:14.976460 systemd-networkd[1429]: cilium_host: Link UP
May 12 13:38:14.976579 systemd-networkd[1429]: cilium_net: Link UP
May 12 13:38:14.976582 systemd-networkd[1429]: cilium_net: Gained carrier
May 12 13:38:14.976721 systemd-networkd[1429]: cilium_host: Gained carrier
May 12 13:38:15.069692 systemd-networkd[1429]: cilium_vxlan: Link UP
May 12 13:38:15.069698 systemd-networkd[1429]: cilium_vxlan: Gained carrier
May 12 13:38:15.306214 systemd-networkd[1429]: cilium_host: Gained IPv6LL
May 12 13:38:15.377078 kernel: NET: Registered PF_ALG protocol family
May 12 13:38:15.642211 systemd-networkd[1429]: cilium_net: Gained IPv6LL
May 12 13:38:15.780940 kubelet[2626]: E0512 13:38:15.780902 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:15.925724 systemd-networkd[1429]: lxc_health: Link UP
May 12 13:38:15.926752 systemd-networkd[1429]: lxc_health: Gained carrier
May 12 13:38:16.346178 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL
May 12 13:38:16.438077 kernel: eth0: renamed from tmpf8582
May 12 13:38:16.454930 systemd-networkd[1429]: lxc1d4bf98a33fe: Link UP
May 12 13:38:16.456069 kernel: eth0: renamed from tmp5a1d1
May 12 13:38:16.463581 systemd-networkd[1429]: lxc19a5a9e8537d: Link UP
May 12 13:38:16.463843 systemd-networkd[1429]: lxc1d4bf98a33fe: Gained carrier
May 12 13:38:16.464917 systemd-networkd[1429]: lxc19a5a9e8537d: Gained carrier
May 12 13:38:16.785033 kubelet[2626]: E0512 13:38:16.784406 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:17.627961 systemd-networkd[1429]: lxc1d4bf98a33fe: Gained IPv6LL
May 12 13:38:17.885195 systemd-networkd[1429]: lxc_health: Gained IPv6LL
May 12 13:38:18.138232 systemd-networkd[1429]: lxc19a5a9e8537d: Gained IPv6LL
May 12 13:38:19.920902 containerd[1525]: time="2025-05-12T13:38:19.920835879Z" level=info msg="connecting to shim 5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074" address="unix:///run/containerd/s/19df77f05b8ca6e5e0177bedea82db9d02d71feee1523bfa8f051426f4badddd" namespace=k8s.io protocol=ttrpc version=3
May 12 13:38:19.922272 containerd[1525]: time="2025-05-12T13:38:19.921841530Z" level=info msg="connecting to shim f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8" address="unix:///run/containerd/s/bbce35ac6dc4f6f21037042c2ecacc7d705cc281bc337bdda3ef6ee77df1e41c" namespace=k8s.io protocol=ttrpc version=3
May 12 13:38:19.950365 systemd[1]: Started cri-containerd-5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074.scope - libcontainer container 5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074.
May 12 13:38:19.951446 systemd[1]: Started cri-containerd-f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8.scope - libcontainer container f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8.
May 12 13:38:19.961427 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 13:38:19.963091 systemd-resolved[1344]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 12 13:38:19.984525 containerd[1525]: time="2025-05-12T13:38:19.984487765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kvpq4,Uid:deb99d32-7b9c-4931-b9c0-247253e13b2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8\""
May 12 13:38:19.985215 kubelet[2626]: E0512 13:38:19.985191 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:38:19.987118 containerd[1525]: time="2025-05-12T13:38:19.986631598Z" level=info msg="CreateContainer within sandbox
\"f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:38:19.987456 containerd[1525]: time="2025-05-12T13:38:19.987415229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p48p8,Uid:9500f94e-fdc1-4e0a-8829-0a6c844c2395,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074\"" May 12 13:38:19.988197 kubelet[2626]: E0512 13:38:19.988177 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:19.990375 containerd[1525]: time="2025-05-12T13:38:19.990343732Z" level=info msg="CreateContainer within sandbox \"5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 12 13:38:19.997036 containerd[1525]: time="2025-05-12T13:38:19.997008732Z" level=info msg="Container 35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476: CDI devices from CRI Config.CDIDevices: []" May 12 13:38:20.002554 containerd[1525]: time="2025-05-12T13:38:20.002466057Z" level=info msg="Container 48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d: CDI devices from CRI Config.CDIDevices: []" May 12 13:38:20.002554 containerd[1525]: time="2025-05-12T13:38:20.002495500Z" level=info msg="CreateContainer within sandbox \"f858217162ba1dfbfc26945821b688acbf3de9991043ca8aad13280aef2b43d8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476\"" May 12 13:38:20.003158 containerd[1525]: time="2025-05-12T13:38:20.003130435Z" level=info msg="StartContainer for \"35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476\"" May 12 13:38:20.004000 containerd[1525]: time="2025-05-12T13:38:20.003936424Z" level=info msg="connecting 
to shim 35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476" address="unix:///run/containerd/s/bbce35ac6dc4f6f21037042c2ecacc7d705cc281bc337bdda3ef6ee77df1e41c" protocol=ttrpc version=3 May 12 13:38:20.008542 containerd[1525]: time="2025-05-12T13:38:20.008441132Z" level=info msg="CreateContainer within sandbox \"5a1d130f88af09b4ed9e6d4b7e2cbc648bff0bf8298cae975efe85f357594074\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d\"" May 12 13:38:20.009118 containerd[1525]: time="2025-05-12T13:38:20.009084188Z" level=info msg="StartContainer for \"48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d\"" May 12 13:38:20.009986 containerd[1525]: time="2025-05-12T13:38:20.009949262Z" level=info msg="connecting to shim 48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d" address="unix:///run/containerd/s/19df77f05b8ca6e5e0177bedea82db9d02d71feee1523bfa8f051426f4badddd" protocol=ttrpc version=3 May 12 13:38:20.025197 systemd[1]: Started cri-containerd-35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476.scope - libcontainer container 35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476. May 12 13:38:20.028091 systemd[1]: Started cri-containerd-48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d.scope - libcontainer container 48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d. 
May 12 13:38:20.055358 containerd[1525]: time="2025-05-12T13:38:20.055317812Z" level=info msg="StartContainer for \"35077a1774fe388d521069062b3924d1275d4b30ed0327f17670ecf568d42476\" returns successfully" May 12 13:38:20.061937 containerd[1525]: time="2025-05-12T13:38:20.061899019Z" level=info msg="StartContainer for \"48bcf64d2e00810deb1941f8e43f5225744cb0da2b835ca3bbc5c6440d01747d\" returns successfully" May 12 13:38:20.791707 kubelet[2626]: E0512 13:38:20.791534 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:20.799534 kubelet[2626]: E0512 13:38:20.799510 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:20.814544 kubelet[2626]: I0512 13:38:20.814477 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kvpq4" podStartSLOduration=20.814460272 podStartE2EDuration="20.814460272s" podCreationTimestamp="2025-05-12 13:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:38:20.802657975 +0000 UTC m=+27.185332602" watchObservedRunningTime="2025-05-12 13:38:20.814460272 +0000 UTC m=+27.197134899" May 12 13:38:20.904441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884275884.mount: Deactivated successfully. 
May 12 13:38:21.801464 kubelet[2626]: E0512 13:38:21.801358 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:21.801942 kubelet[2626]: E0512 13:38:21.801665 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:21.838788 kubelet[2626]: I0512 13:38:21.838741 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 12 13:38:21.839519 kubelet[2626]: E0512 13:38:21.839452 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:21.854034 kubelet[2626]: I0512 13:38:21.853968 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p48p8" podStartSLOduration=21.853952263 podStartE2EDuration="21.853952263s" podCreationTimestamp="2025-05-12 13:38:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:38:20.822785389 +0000 UTC m=+27.205459976" watchObservedRunningTime="2025-05-12 13:38:21.853952263 +0000 UTC m=+28.236626890" May 12 13:38:22.609513 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:46834.service - OpenSSH per-connection server daemon (10.0.0.1:46834). May 12 13:38:22.671914 sshd[3962]: Accepted publickey for core from 10.0.0.1 port 46834 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:22.673100 sshd-session[3962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:22.676722 systemd-logind[1500]: New session 8 of user core. May 12 13:38:22.690244 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 12 13:38:22.802990 kubelet[2626]: E0512 13:38:22.802878 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:22.803305 kubelet[2626]: E0512 13:38:22.803202 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:22.803305 kubelet[2626]: E0512 13:38:22.803299 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:38:22.826971 sshd[3964]: Connection closed by 10.0.0.1 port 46834 May 12 13:38:22.827299 sshd-session[3962]: pam_unix(sshd:session): session closed for user core May 12 13:38:22.831044 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. May 12 13:38:22.831328 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:46834.service: Deactivated successfully. May 12 13:38:22.833677 systemd[1]: session-8.scope: Deactivated successfully. May 12 13:38:22.834661 systemd-logind[1500]: Removed session 8. May 12 13:38:27.839509 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:46838.service - OpenSSH per-connection server daemon (10.0.0.1:46838). May 12 13:38:27.896375 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 46838 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:27.897579 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:27.901817 systemd-logind[1500]: New session 9 of user core. May 12 13:38:27.910192 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 12 13:38:28.017747 sshd[3982]: Connection closed by 10.0.0.1 port 46838 May 12 13:38:28.018085 sshd-session[3980]: pam_unix(sshd:session): session closed for user core May 12 13:38:28.020866 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:46838.service: Deactivated successfully. May 12 13:38:28.022561 systemd[1]: session-9.scope: Deactivated successfully. May 12 13:38:28.023803 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. May 12 13:38:28.024662 systemd-logind[1500]: Removed session 9. May 12 13:38:33.030228 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:57940.service - OpenSSH per-connection server daemon (10.0.0.1:57940). May 12 13:38:33.084791 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 57940 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:33.085910 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:33.089746 systemd-logind[1500]: New session 10 of user core. May 12 13:38:33.101206 systemd[1]: Started session-10.scope - Session 10 of User core. May 12 13:38:33.212291 sshd[4003]: Connection closed by 10.0.0.1 port 57940 May 12 13:38:33.212790 sshd-session[4001]: pam_unix(sshd:session): session closed for user core May 12 13:38:33.216161 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:57940.service: Deactivated successfully. May 12 13:38:33.217789 systemd[1]: session-10.scope: Deactivated successfully. May 12 13:38:33.219073 systemd-logind[1500]: Session 10 logged out. Waiting for processes to exit. May 12 13:38:33.219951 systemd-logind[1500]: Removed session 10. May 12 13:38:38.224863 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:57944.service - OpenSSH per-connection server daemon (10.0.0.1:57944). 
May 12 13:38:38.269071 sshd[4018]: Accepted publickey for core from 10.0.0.1 port 57944 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:38.270266 sshd-session[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:38.274116 systemd-logind[1500]: New session 11 of user core. May 12 13:38:38.281181 systemd[1]: Started session-11.scope - Session 11 of User core. May 12 13:38:38.388980 sshd[4020]: Connection closed by 10.0.0.1 port 57944 May 12 13:38:38.389654 sshd-session[4018]: pam_unix(sshd:session): session closed for user core May 12 13:38:38.406368 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:57944.service: Deactivated successfully. May 12 13:38:38.407783 systemd[1]: session-11.scope: Deactivated successfully. May 12 13:38:38.408746 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. May 12 13:38:38.410327 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:57954.service - OpenSSH per-connection server daemon (10.0.0.1:57954). May 12 13:38:38.411553 systemd-logind[1500]: Removed session 11. May 12 13:38:38.461769 sshd[4033]: Accepted publickey for core from 10.0.0.1 port 57954 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:38.463036 sshd-session[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:38.467122 systemd-logind[1500]: New session 12 of user core. May 12 13:38:38.476190 systemd[1]: Started session-12.scope - Session 12 of User core. May 12 13:38:38.670212 sshd[4036]: Connection closed by 10.0.0.1 port 57954 May 12 13:38:38.671178 sshd-session[4033]: pam_unix(sshd:session): session closed for user core May 12 13:38:38.682638 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:57954.service: Deactivated successfully. May 12 13:38:38.684271 systemd[1]: session-12.scope: Deactivated successfully. May 12 13:38:38.685866 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. 
May 12 13:38:38.687832 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:57966.service - OpenSSH per-connection server daemon (10.0.0.1:57966). May 12 13:38:38.688665 systemd-logind[1500]: Removed session 12. May 12 13:38:38.740186 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 57966 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:38.741326 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:38.745415 systemd-logind[1500]: New session 13 of user core. May 12 13:38:38.754130 systemd[1]: Started session-13.scope - Session 13 of User core. May 12 13:38:38.867145 sshd[4050]: Connection closed by 10.0.0.1 port 57966 May 12 13:38:38.867462 sshd-session[4047]: pam_unix(sshd:session): session closed for user core May 12 13:38:38.870699 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:57966.service: Deactivated successfully. May 12 13:38:38.872297 systemd[1]: session-13.scope: Deactivated successfully. May 12 13:38:38.872915 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. May 12 13:38:38.873781 systemd-logind[1500]: Removed session 13. May 12 13:38:43.879395 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:55132.service - OpenSSH per-connection server daemon (10.0.0.1:55132). May 12 13:38:43.934417 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 55132 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:43.935574 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:43.939495 systemd-logind[1500]: New session 14 of user core. May 12 13:38:43.948183 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 12 13:38:44.061085 sshd[4068]: Connection closed by 10.0.0.1 port 55132 May 12 13:38:44.061212 sshd-session[4066]: pam_unix(sshd:session): session closed for user core May 12 13:38:44.064584 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:55132.service: Deactivated successfully. May 12 13:38:44.067992 systemd[1]: session-14.scope: Deactivated successfully. May 12 13:38:44.068701 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. May 12 13:38:44.069593 systemd-logind[1500]: Removed session 14. May 12 13:38:49.076529 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:55142.service - OpenSSH per-connection server daemon (10.0.0.1:55142). May 12 13:38:49.128852 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 55142 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:49.129966 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:49.133923 systemd-logind[1500]: New session 15 of user core. May 12 13:38:49.144199 systemd[1]: Started session-15.scope - Session 15 of User core. May 12 13:38:49.253926 sshd[4083]: Connection closed by 10.0.0.1 port 55142 May 12 13:38:49.254335 sshd-session[4081]: pam_unix(sshd:session): session closed for user core May 12 13:38:49.265058 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:55142.service: Deactivated successfully. May 12 13:38:49.266426 systemd[1]: session-15.scope: Deactivated successfully. May 12 13:38:49.267179 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. May 12 13:38:49.268765 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:55146.service - OpenSSH per-connection server daemon (10.0.0.1:55146). May 12 13:38:49.269736 systemd-logind[1500]: Removed session 15. 
May 12 13:38:49.315594 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:49.316691 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:49.320367 systemd-logind[1500]: New session 16 of user core. May 12 13:38:49.330172 systemd[1]: Started session-16.scope - Session 16 of User core. May 12 13:38:49.525758 sshd[4099]: Connection closed by 10.0.0.1 port 55146 May 12 13:38:49.526427 sshd-session[4096]: pam_unix(sshd:session): session closed for user core May 12 13:38:49.535384 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:55146.service: Deactivated successfully. May 12 13:38:49.537166 systemd[1]: session-16.scope: Deactivated successfully. May 12 13:38:49.537837 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. May 12 13:38:49.539543 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:55148.service - OpenSSH per-connection server daemon (10.0.0.1:55148). May 12 13:38:49.540628 systemd-logind[1500]: Removed session 16. May 12 13:38:49.597063 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 55148 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:49.598209 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:49.602795 systemd-logind[1500]: New session 17 of user core. May 12 13:38:49.612170 systemd[1]: Started session-17.scope - Session 17 of User core. May 12 13:38:50.350325 sshd[4113]: Connection closed by 10.0.0.1 port 55148 May 12 13:38:50.350783 sshd-session[4110]: pam_unix(sshd:session): session closed for user core May 12 13:38:50.365145 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:55148.service: Deactivated successfully. May 12 13:38:50.368946 systemd[1]: session-17.scope: Deactivated successfully. May 12 13:38:50.370766 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. 
May 12 13:38:50.372081 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:55162.service - OpenSSH per-connection server daemon (10.0.0.1:55162). May 12 13:38:50.375013 systemd-logind[1500]: Removed session 17. May 12 13:38:50.431557 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 55162 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:50.432691 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:50.437110 systemd-logind[1500]: New session 18 of user core. May 12 13:38:50.447179 systemd[1]: Started session-18.scope - Session 18 of User core. May 12 13:38:50.655853 sshd[4136]: Connection closed by 10.0.0.1 port 55162 May 12 13:38:50.656684 sshd-session[4133]: pam_unix(sshd:session): session closed for user core May 12 13:38:50.669748 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:55162.service: Deactivated successfully. May 12 13:38:50.672386 systemd[1]: session-18.scope: Deactivated successfully. May 12 13:38:50.673225 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. May 12 13:38:50.675445 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). May 12 13:38:50.676577 systemd-logind[1500]: Removed session 18. May 12 13:38:50.727942 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:50.729176 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:50.733109 systemd-logind[1500]: New session 19 of user core. May 12 13:38:50.751190 systemd[1]: Started session-19.scope - Session 19 of User core. May 12 13:38:50.863429 sshd[4150]: Connection closed by 10.0.0.1 port 55178 May 12 13:38:50.864104 sshd-session[4147]: pam_unix(sshd:session): session closed for user core May 12 13:38:50.867332 systemd-logind[1500]: Session 19 logged out. 
Waiting for processes to exit. May 12 13:38:50.867662 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:55178.service: Deactivated successfully. May 12 13:38:50.869249 systemd[1]: session-19.scope: Deactivated successfully. May 12 13:38:50.870349 systemd-logind[1500]: Removed session 19. May 12 13:38:55.876991 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:50192.service - OpenSSH per-connection server daemon (10.0.0.1:50192). May 12 13:38:55.928376 sshd[4168]: Accepted publickey for core from 10.0.0.1 port 50192 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:38:55.929489 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:38:55.933692 systemd-logind[1500]: New session 20 of user core. May 12 13:38:55.939186 systemd[1]: Started session-20.scope - Session 20 of User core. May 12 13:38:56.043326 sshd[4170]: Connection closed by 10.0.0.1 port 50192 May 12 13:38:56.044256 sshd-session[4168]: pam_unix(sshd:session): session closed for user core May 12 13:38:56.047481 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:50192.service: Deactivated successfully. May 12 13:38:56.049257 systemd[1]: session-20.scope: Deactivated successfully. May 12 13:38:56.050583 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. May 12 13:38:56.051506 systemd-logind[1500]: Removed session 20. May 12 13:39:01.058897 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:50196.service - OpenSSH per-connection server daemon (10.0.0.1:50196). May 12 13:39:01.113446 sshd[4183]: Accepted publickey for core from 10.0.0.1 port 50196 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:01.114713 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:01.120215 systemd-logind[1500]: New session 21 of user core. May 12 13:39:01.131680 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 12 13:39:01.244016 sshd[4187]: Connection closed by 10.0.0.1 port 50196 May 12 13:39:01.243776 sshd-session[4183]: pam_unix(sshd:session): session closed for user core May 12 13:39:01.246764 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:50196.service: Deactivated successfully. May 12 13:39:01.248337 systemd[1]: session-21.scope: Deactivated successfully. May 12 13:39:01.249636 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. May 12 13:39:01.250673 systemd-logind[1500]: Removed session 21. May 12 13:39:06.255559 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:36188.service - OpenSSH per-connection server daemon (10.0.0.1:36188). May 12 13:39:06.309439 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 36188 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:06.310504 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:06.314106 systemd-logind[1500]: New session 22 of user core. May 12 13:39:06.325218 systemd[1]: Started session-22.scope - Session 22 of User core. May 12 13:39:06.429637 sshd[4202]: Connection closed by 10.0.0.1 port 36188 May 12 13:39:06.430874 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 12 13:39:06.438255 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:36188.service: Deactivated successfully. May 12 13:39:06.439699 systemd[1]: session-22.scope: Deactivated successfully. May 12 13:39:06.440489 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. May 12 13:39:06.442275 systemd[1]: Started sshd@22-10.0.0.120:22-10.0.0.1:36192.service - OpenSSH per-connection server daemon (10.0.0.1:36192). May 12 13:39:06.445460 systemd-logind[1500]: Removed session 22. 
May 12 13:39:06.494077 sshd[4214]: Accepted publickey for core from 10.0.0.1 port 36192 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:06.494907 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:06.498701 systemd-logind[1500]: New session 23 of user core. May 12 13:39:06.511247 systemd[1]: Started session-23.scope - Session 23 of User core. May 12 13:39:08.399749 containerd[1525]: time="2025-05-12T13:39:08.399668318Z" level=info msg="StopContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" with timeout 30 (s)" May 12 13:39:08.400448 containerd[1525]: time="2025-05-12T13:39:08.400315912Z" level=info msg="Stop container \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" with signal terminated" May 12 13:39:08.411014 systemd[1]: cri-containerd-886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4.scope: Deactivated successfully. May 12 13:39:08.412510 containerd[1525]: time="2025-05-12T13:39:08.412470691Z" level=info msg="received exit event container_id:\"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" id:\"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" pid:3202 exited_at:{seconds:1747057148 nanos:411580659}" May 12 13:39:08.412646 containerd[1525]: time="2025-05-12T13:39:08.412618730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" id:\"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" pid:3202 exited_at:{seconds:1747057148 nanos:411580659}" May 12 13:39:08.416589 containerd[1525]: time="2025-05-12T13:39:08.416503898Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 12 
13:39:08.422734 containerd[1525]: time="2025-05-12T13:39:08.422614087Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" id:\"2937967fc190b51d229575514d6880680565e32d77c0832ec2d32605af8649a8\" pid:4237 exited_at:{seconds:1747057148 nanos:422080332}"
May 12 13:39:08.424741 containerd[1525]: time="2025-05-12T13:39:08.424702910Z" level=info msg="StopContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" with timeout 2 (s)"
May 12 13:39:08.425076 containerd[1525]: time="2025-05-12T13:39:08.424959028Z" level=info msg="Stop container \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" with signal terminated"
May 12 13:39:08.430855 systemd-networkd[1429]: lxc_health: Link DOWN
May 12 13:39:08.430861 systemd-networkd[1429]: lxc_health: Lost carrier
May 12 13:39:08.437761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4-rootfs.mount: Deactivated successfully.
May 12 13:39:08.450111 systemd[1]: cri-containerd-6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f.scope: Deactivated successfully.
May 12 13:39:08.452392 systemd[1]: cri-containerd-6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f.scope: Consumed 6.337s CPU time, 123.3M memory peak, 144K read from disk, 12.9M written to disk.
May 12 13:39:08.454070 containerd[1525]: time="2025-05-12T13:39:08.453516711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" pid:3286 exited_at:{seconds:1747057148 nanos:453208513}"
May 12 13:39:08.454070 containerd[1525]: time="2025-05-12T13:39:08.453616430Z" level=info msg="received exit event container_id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" id:\"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" pid:3286 exited_at:{seconds:1747057148 nanos:453208513}"
May 12 13:39:08.457632 containerd[1525]: time="2025-05-12T13:39:08.457599917Z" level=info msg="StopContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" returns successfully"
May 12 13:39:08.461063 containerd[1525]: time="2025-05-12T13:39:08.458118952Z" level=info msg="StopPodSandbox for \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\""
May 12 13:39:08.461340 containerd[1525]: time="2025-05-12T13:39:08.461241766Z" level=info msg="Container to stop \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.478271 systemd[1]: cri-containerd-8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c.scope: Deactivated successfully.
May 12 13:39:08.481465 containerd[1525]: time="2025-05-12T13:39:08.481351400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" id:\"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" pid:2853 exit_status:137 exited_at:{seconds:1747057148 nanos:480790004}"
May 12 13:39:08.491704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f-rootfs.mount: Deactivated successfully.
May 12 13:39:08.500263 containerd[1525]: time="2025-05-12T13:39:08.500224843Z" level=info msg="StopContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" returns successfully"
May 12 13:39:08.500807 containerd[1525]: time="2025-05-12T13:39:08.500778798Z" level=info msg="StopPodSandbox for \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\""
May 12 13:39:08.500897 containerd[1525]: time="2025-05-12T13:39:08.500837958Z" level=info msg="Container to stop \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.500897 containerd[1525]: time="2025-05-12T13:39:08.500848918Z" level=info msg="Container to stop \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.500897 containerd[1525]: time="2025-05-12T13:39:08.500858638Z" level=info msg="Container to stop \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.500897 containerd[1525]: time="2025-05-12T13:39:08.500866918Z" level=info msg="Container to stop \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.500897 containerd[1525]: time="2025-05-12T13:39:08.500874478Z" level=info msg="Container to stop \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 12 13:39:08.505872 systemd[1]: cri-containerd-98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6.scope: Deactivated successfully.
May 12 13:39:08.510946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c-rootfs.mount: Deactivated successfully.
May 12 13:39:08.519079 containerd[1525]: time="2025-05-12T13:39:08.518310493Z" level=info msg="shim disconnected" id=8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c namespace=k8s.io
May 12 13:39:08.531112 containerd[1525]: time="2025-05-12T13:39:08.518346372Z" level=warning msg="cleaning up after shim disconnected" id=8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c namespace=k8s.io
May 12 13:39:08.531112 containerd[1525]: time="2025-05-12T13:39:08.531106987Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 13:39:08.533386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6-rootfs.mount: Deactivated successfully.
May 12 13:39:08.535894 containerd[1525]: time="2025-05-12T13:39:08.535857547Z" level=info msg="shim disconnected" id=98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6 namespace=k8s.io
May 12 13:39:08.535991 containerd[1525]: time="2025-05-12T13:39:08.535892227Z" level=warning msg="cleaning up after shim disconnected" id=98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6 namespace=k8s.io
May 12 13:39:08.535991 containerd[1525]: time="2025-05-12T13:39:08.535948426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 12 13:39:08.550161 containerd[1525]: time="2025-05-12T13:39:08.549956350Z" level=info msg="TaskExit event in podsandbox handler container_id:\"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" id:\"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" pid:2782 exit_status:137 exited_at:{seconds:1747057148 nanos:512428742}"
May 12 13:39:08.550161 containerd[1525]: time="2025-05-12T13:39:08.550088829Z" level=info msg="received exit event sandbox_id:\"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" exit_status:137 exited_at:{seconds:1747057148 nanos:480790004}"
May 12 13:39:08.550444 containerd[1525]: time="2025-05-12T13:39:08.550099189Z" level=info msg="received exit event sandbox_id:\"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" exit_status:137 exited_at:{seconds:1747057148 nanos:512428742}"
May 12 13:39:08.552214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c-shm.mount: Deactivated successfully.
May 12 13:39:08.552357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6-shm.mount: Deactivated successfully.
May 12 13:39:08.555682 containerd[1525]: time="2025-05-12T13:39:08.555644063Z" level=info msg="TearDown network for sandbox \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" successfully"
May 12 13:39:08.555682 containerd[1525]: time="2025-05-12T13:39:08.555676543Z" level=info msg="StopPodSandbox for \"8d46f0bf44c755c3f68684babc96efab07d7949fe5348135835cadc3962b402c\" returns successfully"
May 12 13:39:08.555800 containerd[1525]: time="2025-05-12T13:39:08.555774382Z" level=info msg="TearDown network for sandbox \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" successfully"
May 12 13:39:08.555800 containerd[1525]: time="2025-05-12T13:39:08.555791782Z" level=info msg="StopPodSandbox for \"98bb00bd2c384a8b4916f7155b80b66727bb858326a15fdcf39ba4e7189e9ca6\" returns successfully"
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702591 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-xtables-lock\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702648 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-cilium-config-path\") pod \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\" (UID: \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\") "
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702696 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5626306d-44f4-4250-8599-fda8cfda8403-clustermesh-secrets\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702715 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9sfw\" (UniqueName: \"kubernetes.io/projected/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-kube-api-access-b9sfw\") pod \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\" (UID: \"38ba2b40-3686-440e-b70b-6c2b9d4a4bc6\") "
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702735 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-hubble-tls\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.702750 kubelet[2626]: I0512 13:39:08.702750 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-cgroup\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702764 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-run\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702779 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-net\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702795 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wl6f\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-kube-api-access-2wl6f\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702811 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-lib-modules\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702827 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5626306d-44f4-4250-8599-fda8cfda8403-cilium-config-path\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703247 kubelet[2626]: I0512 13:39:08.702864 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-kernel\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703451 kubelet[2626]: I0512 13:39:08.702879 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cni-path\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703451 kubelet[2626]: I0512 13:39:08.702895 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-etc-cni-netd\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703451 kubelet[2626]: I0512 13:39:08.702912 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-bpf-maps\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.703451 kubelet[2626]: I0512 13:39:08.702925 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-hostproc\") pod \"5626306d-44f4-4250-8599-fda8cfda8403\" (UID: \"5626306d-44f4-4250-8599-fda8cfda8403\") "
May 12 13:39:08.704206 kubelet[2626]: I0512 13:39:08.704173 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-hostproc" (OuterVolumeSpecName: "hostproc") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.704604 kubelet[2626]: I0512 13:39:08.704313 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.704604 kubelet[2626]: I0512 13:39:08.704357 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.705876 kubelet[2626]: I0512 13:39:08.705833 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38ba2b40-3686-440e-b70b-6c2b9d4a4bc6" (UID: "38ba2b40-3686-440e-b70b-6c2b9d4a4bc6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 12 13:39:08.705948 kubelet[2626]: I0512 13:39:08.705889 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.705948 kubelet[2626]: I0512 13:39:08.705907 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.705948 kubelet[2626]: I0512 13:39:08.705921 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.707249 kubelet[2626]: I0512 13:39:08.707202 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-kube-api-access-b9sfw" (OuterVolumeSpecName: "kube-api-access-b9sfw") pod "38ba2b40-3686-440e-b70b-6c2b9d4a4bc6" (UID: "38ba2b40-3686-440e-b70b-6c2b9d4a4bc6"). InnerVolumeSpecName "kube-api-access-b9sfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 12 13:39:08.707883 kubelet[2626]: I0512 13:39:08.707845 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-kube-api-access-2wl6f" (OuterVolumeSpecName: "kube-api-access-2wl6f") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "kube-api-access-2wl6f". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 12 13:39:08.707952 kubelet[2626]: I0512 13:39:08.707896 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cni-path" (OuterVolumeSpecName: "cni-path") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.707952 kubelet[2626]: I0512 13:39:08.707903 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 12 13:39:08.707952 kubelet[2626]: I0512 13:39:08.707928 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.707952 kubelet[2626]: I0512 13:39:08.707915 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.707952 kubelet[2626]: I0512 13:39:08.707947 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 12 13:39:08.708894 kubelet[2626]: I0512 13:39:08.708849 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5626306d-44f4-4250-8599-fda8cfda8403-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 12 13:39:08.710358 kubelet[2626]: I0512 13:39:08.710324 2626 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5626306d-44f4-4250-8599-fda8cfda8403-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5626306d-44f4-4250-8599-fda8cfda8403" (UID: "5626306d-44f4-4250-8599-fda8cfda8403"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 12 13:39:08.738801 kubelet[2626]: E0512 13:39:08.738769 2626 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803165 2626 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803197 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803208 2626 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5626306d-44f4-4250-8599-fda8cfda8403-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803217 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9sfw\" (UniqueName: \"kubernetes.io/projected/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6-kube-api-access-b9sfw\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803224 2626 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803233 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803240 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cilium-run\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803334 kubelet[2626]: I0512 13:39:08.803249 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803256 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2wl6f\" (UniqueName: \"kubernetes.io/projected/5626306d-44f4-4250-8599-fda8cfda8403-kube-api-access-2wl6f\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803264 2626 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-lib-modules\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803271 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5626306d-44f4-4250-8599-fda8cfda8403-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803279 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803287 2626 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-cni-path\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803296 2626 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803309 2626 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.803566 kubelet[2626]: I0512 13:39:08.803316 2626 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5626306d-44f4-4250-8599-fda8cfda8403-hostproc\") on node \"localhost\" DevicePath \"\""
May 12 13:39:08.886739 kubelet[2626]: I0512 13:39:08.886434 2626 scope.go:117] "RemoveContainer" containerID="886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4"
May 12 13:39:08.888894 containerd[1525]: time="2025-05-12T13:39:08.888858777Z" level=info msg="RemoveContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\""
May 12 13:39:08.890382 systemd[1]: Removed slice kubepods-besteffort-pod38ba2b40_3686_440e_b70b_6c2b9d4a4bc6.slice - libcontainer container kubepods-besteffort-pod38ba2b40_3686_440e_b70b_6c2b9d4a4bc6.slice.
May 12 13:39:08.894911 systemd[1]: Removed slice kubepods-burstable-pod5626306d_44f4_4250_8599_fda8cfda8403.slice - libcontainer container kubepods-burstable-pod5626306d_44f4_4250_8599_fda8cfda8403.slice.
May 12 13:39:08.894996 systemd[1]: kubepods-burstable-pod5626306d_44f4_4250_8599_fda8cfda8403.slice: Consumed 6.505s CPU time, 123.6M memory peak, 156K read from disk, 16.1M written to disk.
May 12 13:39:08.908792 containerd[1525]: time="2025-05-12T13:39:08.908741012Z" level=info msg="RemoveContainer for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" returns successfully"
May 12 13:39:08.910242 kubelet[2626]: I0512 13:39:08.910117 2626 scope.go:117] "RemoveContainer" containerID="886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4"
May 12 13:39:08.910694 containerd[1525]: time="2025-05-12T13:39:08.910510357Z" level=error msg="ContainerStatus for \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\": not found"
May 12 13:39:08.916101 kubelet[2626]: E0512 13:39:08.915900 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\": not found" containerID="886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4"
May 12 13:39:08.916101 kubelet[2626]: I0512 13:39:08.915934 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4"} err="failed to get container status \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\": rpc error: code = NotFound desc = an error occurred when try to find container \"886916e1cab54caac8d64c39e577500189f974fbac9d3b7504b327a9c6aabde4\": not found"
May 12 13:39:08.916101 kubelet[2626]: I0512 13:39:08.916008 2626 scope.go:117] "RemoveContainer" containerID="6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f"
May 12 13:39:08.917742 containerd[1525]: time="2025-05-12T13:39:08.917716578Z" level=info msg="RemoveContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\""
May 12 13:39:08.921258 containerd[1525]: time="2025-05-12T13:39:08.921223268Z" level=info msg="RemoveContainer for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" returns successfully"
May 12 13:39:08.921430 kubelet[2626]: I0512 13:39:08.921398 2626 scope.go:117] "RemoveContainer" containerID="c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f"
May 12 13:39:08.922614 containerd[1525]: time="2025-05-12T13:39:08.922591377Z" level=info msg="RemoveContainer for \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\""
May 12 13:39:08.925933 containerd[1525]: time="2025-05-12T13:39:08.925899710Z" level=info msg="RemoveContainer for \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" returns successfully"
May 12 13:39:08.926184 kubelet[2626]: I0512 13:39:08.926091 2626 scope.go:117] "RemoveContainer" containerID="b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595"
May 12 13:39:08.928196 containerd[1525]: time="2025-05-12T13:39:08.928169811Z" level=info msg="RemoveContainer for \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\""
May 12 13:39:08.931362 containerd[1525]: time="2025-05-12T13:39:08.931326305Z" level=info msg="RemoveContainer for \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" returns successfully"
May 12 13:39:08.931567 kubelet[2626]: I0512 13:39:08.931478 2626 scope.go:117] "RemoveContainer" containerID="df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0"
May 12 13:39:08.932824 containerd[1525]: time="2025-05-12T13:39:08.932800572Z" level=info msg="RemoveContainer for \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\""
May 12 13:39:08.935279 containerd[1525]: time="2025-05-12T13:39:08.935241232Z" level=info msg="RemoveContainer for \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" returns successfully"
May 12 13:39:08.935486 kubelet[2626]: I0512 13:39:08.935398 2626 scope.go:117] "RemoveContainer" containerID="d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6"
May 12 13:39:08.936850 containerd[1525]: time="2025-05-12T13:39:08.936821619Z" level=info msg="RemoveContainer for \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\""
May 12 13:39:08.939440 containerd[1525]: time="2025-05-12T13:39:08.939410917Z" level=info msg="RemoveContainer for \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" returns successfully"
May 12 13:39:08.939641 kubelet[2626]: I0512 13:39:08.939554 2626 scope.go:117] "RemoveContainer" containerID="6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f"
May 12 13:39:08.939829 containerd[1525]: time="2025-05-12T13:39:08.939800514Z" level=error msg="ContainerStatus for \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\": not found"
May 12 13:39:08.939960 kubelet[2626]: E0512 13:39:08.939936 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\": not found" containerID="6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f"
May 12 13:39:08.940005 kubelet[2626]: I0512 13:39:08.939984 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f"} err="failed to get container status \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6310830ce0b984ecd03291e7b333fdc288d049570b2cef9be64fec41b993747f\": not found"
May 12 13:39:08.940088 kubelet[2626]: I0512 13:39:08.940007 2626 scope.go:117] "RemoveContainer" containerID="c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f"
May 12 13:39:08.940191 containerd[1525]: time="2025-05-12T13:39:08.940162991Z" level=error msg="ContainerStatus for \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\": not found"
May 12 13:39:08.940293 kubelet[2626]: E0512 13:39:08.940271 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\": not found" containerID="c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f"
May 12 13:39:08.940344 kubelet[2626]: I0512 13:39:08.940298 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f"} err="failed to get container status \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c88731d7f6e8c97c1d61ac5c6a693fbb909c2624a98cd809da3f48619973380f\": not found"
May 12 13:39:08.940344 kubelet[2626]: I0512 13:39:08.940331 2626 scope.go:117] "RemoveContainer" containerID="b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595"
May 12 13:39:08.940499 containerd[1525]: time="2025-05-12T13:39:08.940470229Z" level=error msg="ContainerStatus for \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\": not found"
May 12 13:39:08.940592 kubelet[2626]: E0512 13:39:08.940574 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\": not found" containerID="b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595"
May 12 13:39:08.940650 kubelet[2626]: I0512 13:39:08.940597 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595"} err="failed to get container status \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\": rpc error: code = NotFound desc = an error occurred when try to find container \"b492e1eedfe51a62f3e316a1299efec874fcc3548d95168b205ad2f0b11b2595\": not found"
May 12 13:39:08.940650 kubelet[2626]: I0512 13:39:08.940627 2626 scope.go:117] "RemoveContainer" containerID="df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0"
May 12 13:39:08.940775 containerd[1525]: time="2025-05-12T13:39:08.940752546Z" level=error msg="ContainerStatus for \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\": not found"
May 12 13:39:08.940896 kubelet[2626]: E0512 13:39:08.940879 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\": not found" containerID="df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0"
May 12 13:39:08.940933 kubelet[2626]: I0512 13:39:08.940899 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0"} err="failed to get container status \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"df087537a329ac96617f03115ceb49de7cf9a268dd0775477c7854e2b3433fd0\": not found"
May 12 13:39:08.940933 kubelet[2626]: I0512 13:39:08.940913 2626 scope.go:117] "RemoveContainer" containerID="d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6"
May 12 13:39:08.941085 containerd[1525]: time="2025-05-12T13:39:08.941025304Z" level=error msg="ContainerStatus for \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\": not found"
May 12 13:39:08.941157 kubelet[2626]: E0512 13:39:08.941133 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\": not found" containerID="d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6"
May 12 13:39:08.941187 kubelet[2626]: I0512 13:39:08.941159 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6"} err="failed to get container status \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9e7b3bbfc171b0549e3c499a477ac7ecc3a6f98a7ab36e4ae9b74b2d33e58b6\": not found"
May 12 13:39:09.436316 systemd[1]: var-lib-kubelet-pods-38ba2b40\x2d3686\x2d440e\x2db70b\x2d6c2b9d4a4bc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db9sfw.mount: Deactivated successfully.
May 12 13:39:09.436418 systemd[1]: var-lib-kubelet-pods-5626306d\x2d44f4\x2d4250\x2d8599\x2dfda8cfda8403-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2wl6f.mount: Deactivated successfully.
May 12 13:39:09.436468 systemd[1]: var-lib-kubelet-pods-5626306d\x2d44f4\x2d4250\x2d8599\x2dfda8cfda8403-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 12 13:39:09.436524 systemd[1]: var-lib-kubelet-pods-5626306d\x2d44f4\x2d4250\x2d8599\x2dfda8cfda8403-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 12 13:39:09.705960 kubelet[2626]: I0512 13:39:09.705851 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38ba2b40-3686-440e-b70b-6c2b9d4a4bc6" path="/var/lib/kubelet/pods/38ba2b40-3686-440e-b70b-6c2b9d4a4bc6/volumes"
May 12 13:39:09.706275 kubelet[2626]: I0512 13:39:09.706246 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5626306d-44f4-4250-8599-fda8cfda8403" path="/var/lib/kubelet/pods/5626306d-44f4-4250-8599-fda8cfda8403/volumes"
May 12 13:39:10.357903 sshd[4217]: Connection closed by 10.0.0.1 port 36192
May 12 13:39:10.358410 sshd-session[4214]: pam_unix(sshd:session): session closed for user core
May 12 13:39:10.375381 systemd[1]: sshd@22-10.0.0.120:22-10.0.0.1:36192.service: Deactivated successfully.
May 12 13:39:10.377186 systemd[1]: session-23.scope: Deactivated successfully.
May 12 13:39:10.377389 systemd[1]: session-23.scope: Consumed 1.227s CPU time, 25.5M memory peak.
May 12 13:39:10.377836 systemd-logind[1500]: Session 23 logged out. Waiting for processes to exit.
May 12 13:39:10.379572 systemd[1]: Started sshd@23-10.0.0.120:22-10.0.0.1:36206.service - OpenSSH per-connection server daemon (10.0.0.1:36206).
May 12 13:39:10.380620 systemd-logind[1500]: Removed session 23.
May 12 13:39:10.435865 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 36206 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:10.437101 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:10.441102 systemd-logind[1500]: New session 24 of user core. May 12 13:39:10.448171 systemd[1]: Started session-24.scope - Session 24 of User core. May 12 13:39:10.699986 kubelet[2626]: E0512 13:39:10.699527 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:11.233681 sshd[4365]: Connection closed by 10.0.0.1 port 36206 May 12 13:39:11.234172 sshd-session[4362]: pam_unix(sshd:session): session closed for user core May 12 13:39:11.244500 systemd[1]: sshd@23-10.0.0.120:22-10.0.0.1:36206.service: Deactivated successfully. May 12 13:39:11.250336 systemd[1]: session-24.scope: Deactivated successfully. May 12 13:39:11.250447 kubelet[2626]: I0512 13:39:11.250360 2626 memory_manager.go:355] "RemoveStaleState removing state" podUID="38ba2b40-3686-440e-b70b-6c2b9d4a4bc6" containerName="cilium-operator" May 12 13:39:11.250447 kubelet[2626]: I0512 13:39:11.250385 2626 memory_manager.go:355] "RemoveStaleState removing state" podUID="5626306d-44f4-4250-8599-fda8cfda8403" containerName="cilium-agent" May 12 13:39:11.254629 systemd-logind[1500]: Session 24 logged out. Waiting for processes to exit. 
May 12 13:39:11.256712 kubelet[2626]: W0512 13:39:11.256645 2626 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 12 13:39:11.257103 kubelet[2626]: E0512 13:39:11.256711 2626 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 12 13:39:11.257103 kubelet[2626]: W0512 13:39:11.256650 2626 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 12 13:39:11.257103 kubelet[2626]: E0512 13:39:11.256744 2626 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 12 13:39:11.257103 kubelet[2626]: W0512 13:39:11.256998 2626 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' 
and this object May 12 13:39:11.257103 kubelet[2626]: E0512 13:39:11.257020 2626 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 12 13:39:11.258160 systemd-logind[1500]: Removed session 24. May 12 13:39:11.259105 kubelet[2626]: I0512 13:39:11.259070 2626 status_manager.go:890] "Failed to get status for pod" podUID="6c312cae-441a-43cb-a4f5-40341fe3b4de" pod="kube-system/cilium-822dg" err="pods \"cilium-822dg\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" May 12 13:39:11.260344 systemd[1]: Started sshd@24-10.0.0.120:22-10.0.0.1:36222.service - OpenSSH per-connection server daemon (10.0.0.1:36222). May 12 13:39:11.272853 systemd[1]: Created slice kubepods-burstable-pod6c312cae_441a_43cb_a4f5_40341fe3b4de.slice - libcontainer container kubepods-burstable-pod6c312cae_441a_43cb_a4f5_40341fe3b4de.slice. 
May 12 13:39:11.313915 kubelet[2626]: I0512 13:39:11.313884 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-cni-path\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.313915 kubelet[2626]: I0512 13:39:11.313921 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-xtables-lock\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.313915 kubelet[2626]: I0512 13:39:11.313940 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6c312cae-441a-43cb-a4f5-40341fe3b4de-clustermesh-secrets\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.313915 kubelet[2626]: I0512 13:39:11.313970 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-bpf-maps\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314276 kubelet[2626]: I0512 13:39:11.313988 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxj9\" (UniqueName: \"kubernetes.io/projected/6c312cae-441a-43cb-a4f5-40341fe3b4de-kube-api-access-8fxj9\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314276 kubelet[2626]: I0512 13:39:11.314015 2626 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-cilium-run\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314276 kubelet[2626]: I0512 13:39:11.314032 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6c312cae-441a-43cb-a4f5-40341fe3b4de-hubble-tls\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314276 kubelet[2626]: I0512 13:39:11.314180 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6c312cae-441a-43cb-a4f5-40341fe3b4de-cilium-config-path\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314398 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-host-proc-sys-net\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314423 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-host-proc-sys-kernel\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314453 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-hostproc\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314470 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-cilium-cgroup\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314485 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-etc-cni-netd\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314582 kubelet[2626]: I0512 13:39:11.314500 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6c312cae-441a-43cb-a4f5-40341fe3b4de-cilium-ipsec-secrets\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.314740 kubelet[2626]: I0512 13:39:11.314517 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c312cae-441a-43cb-a4f5-40341fe3b4de-lib-modules\") pod \"cilium-822dg\" (UID: \"6c312cae-441a-43cb-a4f5-40341fe3b4de\") " pod="kube-system/cilium-822dg" May 12 13:39:11.317234 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 36222 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:11.318341 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:11.322253 systemd-logind[1500]: New session 
25 of user core. May 12 13:39:11.336204 systemd[1]: Started session-25.scope - Session 25 of User core. May 12 13:39:11.385752 sshd[4379]: Connection closed by 10.0.0.1 port 36222 May 12 13:39:11.386140 sshd-session[4376]: pam_unix(sshd:session): session closed for user core May 12 13:39:11.398482 systemd[1]: sshd@24-10.0.0.120:22-10.0.0.1:36222.service: Deactivated successfully. May 12 13:39:11.401371 systemd[1]: session-25.scope: Deactivated successfully. May 12 13:39:11.402259 systemd-logind[1500]: Session 25 logged out. Waiting for processes to exit. May 12 13:39:11.404555 systemd[1]: Started sshd@25-10.0.0.120:22-10.0.0.1:36228.service - OpenSSH per-connection server daemon (10.0.0.1:36228). May 12 13:39:11.405161 systemd-logind[1500]: Removed session 25. May 12 13:39:11.457668 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 36228 ssh2: RSA SHA256:jEPoW5jmVqQGUqKP3XswdpHQkuwhsPJWJAB8YbEjhZ8 May 12 13:39:11.458973 sshd-session[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 12 13:39:11.463113 systemd-logind[1500]: New session 26 of user core. May 12 13:39:11.475182 systemd[1]: Started session-26.scope - Session 26 of User core. May 12 13:39:12.415373 kubelet[2626]: E0512 13:39:12.415332 2626 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 12 13:39:12.415681 kubelet[2626]: E0512 13:39:12.415407 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c312cae-441a-43cb-a4f5-40341fe3b4de-clustermesh-secrets podName:6c312cae-441a-43cb-a4f5-40341fe3b4de nodeName:}" failed. No retries permitted until 2025-05-12 13:39:12.915387445 +0000 UTC m=+79.298062072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6c312cae-441a-43cb-a4f5-40341fe3b4de-clustermesh-secrets") pod "cilium-822dg" (UID: "6c312cae-441a-43cb-a4f5-40341fe3b4de") : failed to sync secret cache: timed out waiting for the condition May 12 13:39:12.415681 kubelet[2626]: E0512 13:39:12.415338 2626 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 12 13:39:12.417339 kubelet[2626]: E0512 13:39:12.415431 2626 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-822dg: failed to sync secret cache: timed out waiting for the condition May 12 13:39:12.417339 kubelet[2626]: E0512 13:39:12.417309 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6c312cae-441a-43cb-a4f5-40341fe3b4de-hubble-tls podName:6c312cae-441a-43cb-a4f5-40341fe3b4de nodeName:}" failed. No retries permitted until 2025-05-12 13:39:12.917292956 +0000 UTC m=+79.299967583 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6c312cae-441a-43cb-a4f5-40341fe3b4de-hubble-tls") pod "cilium-822dg" (UID: "6c312cae-441a-43cb-a4f5-40341fe3b4de") : failed to sync secret cache: timed out waiting for the condition May 12 13:39:13.079710 kubelet[2626]: E0512 13:39:13.079640 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:13.081001 containerd[1525]: time="2025-05-12T13:39:13.080955829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-822dg,Uid:6c312cae-441a-43cb-a4f5-40341fe3b4de,Namespace:kube-system,Attempt:0,}" May 12 13:39:13.095995 containerd[1525]: time="2025-05-12T13:39:13.095953137Z" level=info msg="connecting to shim dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" namespace=k8s.io protocol=ttrpc version=3 May 12 13:39:13.120186 systemd[1]: Started cri-containerd-dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb.scope - libcontainer container dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb. 
May 12 13:39:13.141192 containerd[1525]: time="2025-05-12T13:39:13.141148459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-822dg,Uid:6c312cae-441a-43cb-a4f5-40341fe3b4de,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\"" May 12 13:39:13.141794 kubelet[2626]: E0512 13:39:13.141769 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:13.143718 containerd[1525]: time="2025-05-12T13:39:13.143679490Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 12 13:39:13.150262 containerd[1525]: time="2025-05-12T13:39:13.150226427Z" level=info msg="Container 1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5: CDI devices from CRI Config.CDIDevices: []" May 12 13:39:13.157618 containerd[1525]: time="2025-05-12T13:39:13.157562801Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\"" May 12 13:39:13.158785 containerd[1525]: time="2025-05-12T13:39:13.158754317Z" level=info msg="StartContainer for \"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\"" May 12 13:39:13.159739 containerd[1525]: time="2025-05-12T13:39:13.159700754Z" level=info msg="connecting to shim 1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" protocol=ttrpc version=3 May 12 13:39:13.177217 systemd[1]: Started cri-containerd-1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5.scope - libcontainer 
container 1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5. May 12 13:39:13.202263 containerd[1525]: time="2025-05-12T13:39:13.202224285Z" level=info msg="StartContainer for \"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\" returns successfully" May 12 13:39:13.213418 systemd[1]: cri-containerd-1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5.scope: Deactivated successfully. May 12 13:39:13.216833 containerd[1525]: time="2025-05-12T13:39:13.216790514Z" level=info msg="received exit event container_id:\"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\" id:\"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\" pid:4457 exited_at:{seconds:1747057153 nanos:216558795}" May 12 13:39:13.216914 containerd[1525]: time="2025-05-12T13:39:13.216890554Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\" id:\"1de9c93f3eb9fda26b029a30a5007561f7ca8f44fa61f497601455fd8807f2b5\" pid:4457 exited_at:{seconds:1747057153 nanos:216558795}" May 12 13:39:13.739655 kubelet[2626]: E0512 13:39:13.739617 2626 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 12 13:39:13.901161 kubelet[2626]: E0512 13:39:13.901125 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:13.904777 containerd[1525]: time="2025-05-12T13:39:13.904735588Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 12 13:39:13.912309 containerd[1525]: time="2025-05-12T13:39:13.912266122Z" level=info msg="Container 
60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21: CDI devices from CRI Config.CDIDevices: []" May 12 13:39:13.919871 containerd[1525]: time="2025-05-12T13:39:13.919816376Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\"" May 12 13:39:13.920619 containerd[1525]: time="2025-05-12T13:39:13.920580293Z" level=info msg="StartContainer for \"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\"" May 12 13:39:13.921416 containerd[1525]: time="2025-05-12T13:39:13.921390970Z" level=info msg="connecting to shim 60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" protocol=ttrpc version=3 May 12 13:39:13.941188 systemd[1]: Started cri-containerd-60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21.scope - libcontainer container 60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21. May 12 13:39:13.967310 containerd[1525]: time="2025-05-12T13:39:13.967267210Z" level=info msg="StartContainer for \"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\" returns successfully" May 12 13:39:13.974449 systemd[1]: cri-containerd-60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21.scope: Deactivated successfully. 
May 12 13:39:13.979394 containerd[1525]: time="2025-05-12T13:39:13.979247768Z" level=info msg="received exit event container_id:\"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\" id:\"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\" pid:4503 exited_at:{seconds:1747057153 nanos:978808329}" May 12 13:39:13.979394 containerd[1525]: time="2025-05-12T13:39:13.979362368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\" id:\"60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21\" pid:4503 exited_at:{seconds:1747057153 nanos:978808329}" May 12 13:39:13.994826 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60982b6013963c0429bc80c857d732c925056d1d54d1669a3c273f19efb53b21-rootfs.mount: Deactivated successfully. May 12 13:39:14.905010 kubelet[2626]: E0512 13:39:14.904978 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:14.907321 containerd[1525]: time="2025-05-12T13:39:14.907289274Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 12 13:39:14.914249 containerd[1525]: time="2025-05-12T13:39:14.914207695Z" level=info msg="Container 38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d: CDI devices from CRI Config.CDIDevices: []" May 12 13:39:14.922143 containerd[1525]: time="2025-05-12T13:39:14.922097075Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\"" May 12 13:39:14.922595 containerd[1525]: time="2025-05-12T13:39:14.922537434Z" 
level=info msg="StartContainer for \"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\"" May 12 13:39:14.926495 containerd[1525]: time="2025-05-12T13:39:14.925256546Z" level=info msg="connecting to shim 38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" protocol=ttrpc version=3 May 12 13:39:14.974192 systemd[1]: Started cri-containerd-38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d.scope - libcontainer container 38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d. May 12 13:39:15.016215 systemd[1]: cri-containerd-38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d.scope: Deactivated successfully. May 12 13:39:15.017124 containerd[1525]: time="2025-05-12T13:39:15.017086119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\" id:\"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\" pid:4547 exited_at:{seconds:1747057155 nanos:16765800}" May 12 13:39:15.017190 containerd[1525]: time="2025-05-12T13:39:15.017163759Z" level=info msg="received exit event container_id:\"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\" id:\"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\" pid:4547 exited_at:{seconds:1747057155 nanos:16765800}" May 12 13:39:15.018355 containerd[1525]: time="2025-05-12T13:39:15.018324037Z" level=info msg="StartContainer for \"38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d\" returns successfully" May 12 13:39:15.037843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38e0884a2aa3e73a1d9e0c969a95d750dd7ca3338c2a08d080956b58c995871d-rootfs.mount: Deactivated successfully. 
May 12 13:39:15.263270 kubelet[2626]: I0512 13:39:15.263154 2626 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-12T13:39:15Z","lastTransitionTime":"2025-05-12T13:39:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 12 13:39:15.910851 kubelet[2626]: E0512 13:39:15.910765 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:15.914076 containerd[1525]: time="2025-05-12T13:39:15.913801684Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 12 13:39:15.919364 containerd[1525]: time="2025-05-12T13:39:15.919326194Z" level=info msg="Container 3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd: CDI devices from CRI Config.CDIDevices: []" May 12 13:39:15.929225 containerd[1525]: time="2025-05-12T13:39:15.929185856Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\"" May 12 13:39:15.929641 containerd[1525]: time="2025-05-12T13:39:15.929620456Z" level=info msg="StartContainer for \"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\"" May 12 13:39:15.930494 containerd[1525]: time="2025-05-12T13:39:15.930436734Z" level=info msg="connecting to shim 3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" protocol=ttrpc version=3 May 12 
13:39:15.951262 systemd[1]: Started cri-containerd-3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd.scope - libcontainer container 3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd. May 12 13:39:15.972266 systemd[1]: cri-containerd-3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd.scope: Deactivated successfully. May 12 13:39:15.975016 containerd[1525]: time="2025-05-12T13:39:15.974984455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\" id:\"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\" pid:4587 exited_at:{seconds:1747057155 nanos:974634456}" May 12 13:39:15.975419 containerd[1525]: time="2025-05-12T13:39:15.975383294Z" level=info msg="received exit event container_id:\"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\" id:\"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\" pid:4587 exited_at:{seconds:1747057155 nanos:974634456}" May 12 13:39:15.982052 containerd[1525]: time="2025-05-12T13:39:15.982008002Z" level=info msg="StartContainer for \"3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd\" returns successfully" May 12 13:39:15.991614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f14f200eab4de4df84f4306ee2ce083f26b509f13df0d99d0cc4cee98c3a9fd-rootfs.mount: Deactivated successfully. 
May 12 13:39:16.915093 kubelet[2626]: E0512 13:39:16.915028 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 12 13:39:16.919031 containerd[1525]: time="2025-05-12T13:39:16.918354728Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 12 13:39:16.928471 containerd[1525]: time="2025-05-12T13:39:16.928149279Z" level=info msg="Container 0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf: CDI devices from CRI Config.CDIDevices: []" May 12 13:39:16.938563 containerd[1525]: time="2025-05-12T13:39:16.938519429Z" level=info msg="CreateContainer within sandbox \"dc499c9fb087f7eeb6a8216d944de3913c7de15fafa3f17b2d6a769b7067cefb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\"" May 12 13:39:16.938950 containerd[1525]: time="2025-05-12T13:39:16.938925348Z" level=info msg="StartContainer for \"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\"" May 12 13:39:16.939795 containerd[1525]: time="2025-05-12T13:39:16.939763867Z" level=info msg="connecting to shim 0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf" address="unix:///run/containerd/s/cd6cdbf4328989d816c18753dba0b994f02dfa9a2a841176919655d2e08fd562" protocol=ttrpc version=3 May 12 13:39:16.959195 systemd[1]: Started cri-containerd-0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf.scope - libcontainer container 0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf. 
May 12 13:39:16.995174 containerd[1525]: time="2025-05-12T13:39:16.995137614Z" level=info msg="StartContainer for \"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" returns successfully"
May 12 13:39:17.048413 containerd[1525]: time="2025-05-12T13:39:17.048270041Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" id:\"fac55bb3bce5cecf58bd19b487a82c030f180077bb18a06b927837f1dd54cc19\" pid:4655 exited_at:{seconds:1747057157 nanos:47766401}"
May 12 13:39:17.239521 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 12 13:39:17.921126 kubelet[2626]: E0512 13:39:17.921092 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:17.935288 kubelet[2626]: I0512 13:39:17.935225 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-822dg" podStartSLOduration=6.9352093329999995 podStartE2EDuration="6.935209333s" podCreationTimestamp="2025-05-12 13:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-12 13:39:17.934460013 +0000 UTC m=+84.317134640" watchObservedRunningTime="2025-05-12 13:39:17.935209333 +0000 UTC m=+84.317883960"
May 12 13:39:18.699800 kubelet[2626]: E0512 13:39:18.699746 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:19.080510 kubelet[2626]: E0512 13:39:19.080414 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:20.005238 containerd[1525]: time="2025-05-12T13:39:20.005118473Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" id:\"7b79180b7da6c700a5891e7c798f2bf233dd687bbfe36217ac372d5c8eff5948\" pid:5113 exit_status:1 exited_at:{seconds:1747057160 nanos:4662392}"
May 12 13:39:20.046569 systemd-networkd[1429]: lxc_health: Link UP
May 12 13:39:20.056115 systemd-networkd[1429]: lxc_health: Gained carrier
May 12 13:39:21.081678 kubelet[2626]: E0512 13:39:21.081526 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:21.562345 systemd-networkd[1429]: lxc_health: Gained IPv6LL
May 12 13:39:21.928153 kubelet[2626]: E0512 13:39:21.928054 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:22.135765 containerd[1525]: time="2025-05-12T13:39:22.135564362Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" id:\"5ae73b81f94ae00a9f1b298e40d26b42b2699b683438c136f4cc6804a7c9b4af\" pid:5196 exited_at:{seconds:1747057162 nanos:135256880}"
May 12 13:39:22.930401 kubelet[2626]: E0512 13:39:22.930371 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:23.702052 kubelet[2626]: E0512 13:39:23.701974 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 12 13:39:24.241077 containerd[1525]: time="2025-05-12T13:39:24.240842249Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" id:\"70751154424e3b12c05b03253544de2e204b8d068a7ac4f8dcdb4b4e098c9877\" pid:5230 exited_at:{seconds:1747057164 nanos:240578647}"
May 12 13:39:26.363712 containerd[1525]: time="2025-05-12T13:39:26.363162708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a8d91854b4235cb75f2f174cc86376b4d29e8eef074753d9230cc80241d26cf\" id:\"8036f47ab1f8e0e6eb8f97fa88ceb2f919614accf15fd12aaf2f0e41be9d5f81\" pid:5255 exited_at:{seconds:1747057166 nanos:362558905}"
May 12 13:39:26.368527 sshd[4389]: Connection closed by 10.0.0.1 port 36228
May 12 13:39:26.369027 sshd-session[4385]: pam_unix(sshd:session): session closed for user core
May 12 13:39:26.372995 systemd-logind[1500]: Session 26 logged out. Waiting for processes to exit.
May 12 13:39:26.373242 systemd[1]: sshd@25-10.0.0.120:22-10.0.0.1:36228.service: Deactivated successfully.
May 12 13:39:26.374922 systemd[1]: session-26.scope: Deactivated successfully.
May 12 13:39:26.375737 systemd-logind[1500]: Removed session 26.
May 12 13:39:27.700102 kubelet[2626]: E0512 13:39:27.699706 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"