May 15 23:25:29.917849 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 23:25:29.917879 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Thu May 15 22:10:19 -00 2025 May 15 23:25:29.917889 kernel: KASLR enabled May 15 23:25:29.917894 kernel: efi: EFI v2.7 by EDK II May 15 23:25:29.917900 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 May 15 23:25:29.917905 kernel: random: crng init done May 15 23:25:29.917912 kernel: secureboot: Secure boot disabled May 15 23:25:29.917918 kernel: ACPI: Early table checksum verification disabled May 15 23:25:29.917924 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 15 23:25:29.917931 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 23:25:29.917937 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917943 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917949 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917955 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917962 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917969 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917976 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917982 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917988 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 23:25:29.917994 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 23:25:29.918000 kernel: NUMA: Failed to initialise from firmware May 15 23:25:29.918006 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:25:29.918013 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff] May 15 23:25:29.918019 kernel: Zone ranges: May 15 23:25:29.918024 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:25:29.918032 kernel: DMA32 empty May 15 23:25:29.918038 kernel: Normal empty May 15 23:25:29.918044 kernel: Movable zone start for each node May 15 23:25:29.918050 kernel: Early memory node ranges May 15 23:25:29.918056 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] May 15 23:25:29.918062 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] May 15 23:25:29.918068 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] May 15 23:25:29.918074 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 15 23:25:29.918080 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 15 23:25:29.918086 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 15 23:25:29.918092 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 15 23:25:29.918098 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 15 23:25:29.918105 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 15 23:25:29.918112 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 23:25:29.918118 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 23:25:29.918127 kernel: psci: 
probing for conduit method from ACPI. May 15 23:25:29.918134 kernel: psci: PSCIv1.1 detected in firmware. May 15 23:25:29.918140 kernel: psci: Using standard PSCI v0.2 function IDs May 15 23:25:29.918148 kernel: psci: Trusted OS migration not required May 15 23:25:29.918154 kernel: psci: SMC Calling Convention v1.1 May 15 23:25:29.918162 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 23:25:29.918169 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 15 23:25:29.918176 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 15 23:25:29.918182 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 23:25:29.918189 kernel: Detected PIPT I-cache on CPU0 May 15 23:25:29.918195 kernel: CPU features: detected: GIC system register CPU interface May 15 23:25:29.918202 kernel: CPU features: detected: Hardware dirty bit management May 15 23:25:29.918208 kernel: CPU features: detected: Spectre-v4 May 15 23:25:29.918224 kernel: CPU features: detected: Spectre-BHB May 15 23:25:29.918232 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 23:25:29.918239 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 23:25:29.918266 kernel: CPU features: detected: ARM erratum 1418040 May 15 23:25:29.918281 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 23:25:29.918287 kernel: alternatives: applying boot alternatives May 15 23:25:29.918295 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5842e6d9a9272dc71039ff31db7df13c5a397d9a9917b662574c24d437910f6a May 15 23:25:29.918302 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 23:25:29.918309 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 23:25:29.918315 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 23:25:29.918322 kernel: Fallback order for Node 0: 0 May 15 23:25:29.918330 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 23:25:29.918336 kernel: Policy zone: DMA May 15 23:25:29.918343 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 23:25:29.918349 kernel: software IO TLB: area num 4. May 15 23:25:29.918376 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 15 23:25:29.918383 kernel: Memory: 2387352K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38464K init, 897K bss, 184936K reserved, 0K cma-reserved) May 15 23:25:29.918389 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 23:25:29.918396 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 23:25:29.918403 kernel: rcu: RCU event tracing is enabled. May 15 23:25:29.918409 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 23:25:29.918416 kernel: Trampoline variant of Tasks RCU enabled. May 15 23:25:29.918422 kernel: Tracing variant of Tasks RCU enabled. May 15 23:25:29.918431 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 23:25:29.918437 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 23:25:29.918444 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 23:25:29.918450 kernel: GICv3: 256 SPIs implemented May 15 23:25:29.918457 kernel: GICv3: 0 Extended SPIs implemented May 15 23:25:29.918463 kernel: Root IRQ handler: gic_handle_irq May 15 23:25:29.918469 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 15 23:25:29.918476 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 23:25:29.918483 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 23:25:29.918489 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 15 23:25:29.918496 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 15 23:25:29.918504 kernel: GICv3: using LPI property table @0x00000000400f0000 May 15 23:25:29.918511 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 15 23:25:29.918518 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 15 23:25:29.918524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:25:29.918543 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 23:25:29.918550 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 23:25:29.918556 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 23:25:29.918564 kernel: arm-pv: using stolen time PV May 15 23:25:29.918571 kernel: Console: colour dummy device 80x25 May 15 23:25:29.918578 kernel: ACPI: Core revision 20230628 May 15 23:25:29.918585 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 23:25:29.918593 kernel: pid_max: default: 32768 minimum: 301 May 15 23:25:29.918600 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 15 23:25:29.918606 kernel: landlock: Up and running. May 15 23:25:29.918613 kernel: SELinux: Initializing. May 15 23:25:29.918620 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:25:29.918626 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 23:25:29.918633 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 23:25:29.918640 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:25:29.918647 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 15 23:25:29.918655 kernel: rcu: Hierarchical SRCU implementation. May 15 23:25:29.918662 kernel: rcu: Max phase no-delay instances is 400. May 15 23:25:29.918669 kernel: Platform MSI: ITS@0x8080000 domain created May 15 23:25:29.918676 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 23:25:29.918682 kernel: Remapping and enabling EFI services. May 15 23:25:29.918699 kernel: smp: Bringing up secondary CPUs ... 
May 15 23:25:29.918707 kernel: Detected PIPT I-cache on CPU1 May 15 23:25:29.918714 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 23:25:29.918720 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 15 23:25:29.918729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:25:29.918736 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 23:25:29.918748 kernel: Detected PIPT I-cache on CPU2 May 15 23:25:29.918756 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 23:25:29.918768 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 15 23:25:29.918775 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:25:29.918782 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 23:25:29.918789 kernel: Detected PIPT I-cache on CPU3 May 15 23:25:29.918796 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 23:25:29.918803 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 15 23:25:29.918811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 23:25:29.918818 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 23:25:29.918825 kernel: smp: Brought up 1 node, 4 CPUs May 15 23:25:29.918832 kernel: SMP: Total of 4 processors activated. May 15 23:25:29.918839 kernel: CPU features: detected: 32-bit EL0 Support May 15 23:25:29.918846 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 23:25:29.918853 kernel: CPU features: detected: Common not Private translations May 15 23:25:29.918861 kernel: CPU features: detected: CRC32 instructions May 15 23:25:29.918868 kernel: CPU features: detected: Enhanced Virtualization Traps May 15 23:25:29.918875 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 23:25:29.918882 kernel: CPU features: detected: LSE atomic instructions May 15 23:25:29.918892 kernel: CPU features: detected: Privileged Access Never May 15 23:25:29.918899 kernel: CPU features: detected: RAS Extension Support May 15 23:25:29.918905 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 23:25:29.918912 kernel: CPU: All CPU(s) started at EL1 May 15 23:25:29.918919 kernel: alternatives: applying system-wide alternatives May 15 23:25:29.918927 kernel: devtmpfs: initialized May 15 23:25:29.918935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 23:25:29.918942 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 23:25:29.918949 kernel: pinctrl core: initialized pinctrl subsystem May 15 23:25:29.918955 kernel: SMBIOS 3.0.0 present. 
May 15 23:25:29.918962 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 15 23:25:29.918969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 23:25:29.918976 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 23:25:29.918983 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 23:25:29.918992 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 23:25:29.918999 kernel: audit: initializing netlink subsys (disabled) May 15 23:25:29.919006 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 15 23:25:29.919013 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 23:25:29.919020 kernel: cpuidle: using governor menu May 15 23:25:29.919027 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 23:25:29.919034 kernel: ASID allocator initialised with 32768 entries May 15 23:25:29.919041 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 23:25:29.919048 kernel: Serial: AMBA PL011 UART driver May 15 23:25:29.919056 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 15 23:25:29.919063 kernel: Modules: 0 pages in range for non-PLT usage May 15 23:25:29.919070 kernel: Modules: 509232 pages in range for PLT usage May 15 23:25:29.919077 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 15 23:25:29.919084 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 15 23:25:29.919091 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 15 23:25:29.919098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 15 23:25:29.919105 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 15 23:25:29.919112 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 15 23:25:29.919120 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 15 23:25:29.919128 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 15 23:25:29.919134 kernel: ACPI: Added _OSI(Module Device) May 15 23:25:29.919141 kernel: ACPI: Added _OSI(Processor Device) May 15 23:25:29.919148 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 23:25:29.919155 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 23:25:29.919162 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 23:25:29.919169 kernel: ACPI: Interpreter enabled May 15 23:25:29.919176 kernel: ACPI: Using GIC for interrupt routing May 15 23:25:29.919183 kernel: ACPI: MCFG table detected, 1 entries May 15 23:25:29.919191 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 23:25:29.919205 kernel: printk: console [ttyAMA0] enabled May 15 23:25:29.919219 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 23:25:29.919371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 23:25:29.919450 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 23:25:29.919514 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 23:25:29.919579 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 23:25:29.919645 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 23:25:29.919654 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 15 23:25:29.919661 
kernel: PCI host bridge to bus 0000:00 May 15 23:25:29.919746 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 23:25:29.919821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 15 23:25:29.919884 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 23:25:29.919942 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 23:25:29.920030 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 23:25:29.920112 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 23:25:29.920179 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 23:25:29.920256 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 23:25:29.920324 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 23:25:29.920390 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 23:25:29.920457 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 23:25:29.920538 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 23:25:29.920602 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 23:25:29.920663 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 23:25:29.920734 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 23:25:29.920744 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 23:25:29.920751 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 23:25:29.920758 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 23:25:29.920769 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 23:25:29.920776 kernel: iommu: Default domain type: Translated May 15 23:25:29.920783 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 23:25:29.920790 kernel: efivars: Registered efivars operations May 15 23:25:29.920797 kernel: vgaarb: loaded May 15 23:25:29.920804 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 23:25:29.920811 kernel: VFS: Disk quotas dquot_6.6.0 May 15 23:25:29.920818 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 23:25:29.920825 kernel: pnp: PnP ACPI init May 15 23:25:29.920901 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 23:25:29.920911 kernel: pnp: PnP ACPI: found 1 devices May 15 23:25:29.920919 kernel: NET: Registered PF_INET protocol family May 15 23:25:29.920926 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 23:25:29.920933 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 23:25:29.920940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 23:25:29.920948 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 23:25:29.920955 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 15 23:25:29.920964 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 23:25:29.920971 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:25:29.920978 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 23:25:29.920985 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 23:25:29.920992 kernel: PCI: CLS 0 bytes, default 64 May 15 23:25:29.920999 kernel: kvm [1]: HYP mode not available 
May 15 23:25:29.921006 kernel: Initialise system trusted keyrings May 15 23:25:29.921013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 23:25:29.921020 kernel: Key type asymmetric registered May 15 23:25:29.921028 kernel: Asymmetric key parser 'x509' registered May 15 23:25:29.921035 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 15 23:25:29.921043 kernel: io scheduler mq-deadline registered May 15 23:25:29.921049 kernel: io scheduler kyber registered May 15 23:25:29.921056 kernel: io scheduler bfq registered May 15 23:25:29.921063 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 23:25:29.921070 kernel: ACPI: button: Power Button [PWRB] May 15 23:25:29.921078 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 23:25:29.921147 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 23:25:29.921172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 23:25:29.921179 kernel: thunder_xcv, ver 1.0 May 15 23:25:29.921187 kernel: thunder_bgx, ver 1.0 May 15 23:25:29.921194 kernel: nicpf, ver 1.0 May 15 23:25:29.921201 kernel: nicvf, ver 1.0 May 15 23:25:29.921283 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 23:25:29.921350 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T23:25:29 UTC (1747351529) May 15 23:25:29.921359 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 23:25:29.921369 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 23:25:29.921376 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 15 23:25:29.921387 kernel: watchdog: Hard watchdog permanently disabled May 15 23:25:29.921396 kernel: NET: Registered PF_INET6 protocol family May 15 23:25:29.921405 kernel: Segment Routing with IPv6 May 15 23:25:29.921414 kernel: In-situ OAM (IOAM) with IPv6 May 15 23:25:29.921423 kernel: NET: Registered PF_PACKET protocol family May 15 23:25:29.921430 kernel: Key type dns_resolver registered May 15 23:25:29.921437 kernel: registered taskstats version 1 May 15 23:25:29.921445 kernel: Loading compiled-in X.509 certificates May 15 23:25:29.921454 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 06f4063ae17661ba03d0a772a07398655eacda2e' May 15 23:25:29.921461 kernel: Key type .fscrypt registered May 15 23:25:29.921468 kernel: Key type fscrypt-provisioning registered May 15 23:25:29.921475 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 23:25:29.921482 kernel: ima: Allocated hash algorithm: sha1 May 15 23:25:29.921489 kernel: ima: No architecture policies found May 15 23:25:29.921496 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 23:25:29.921503 kernel: clk: Disabling unused clocks May 15 23:25:29.921511 kernel: Freeing unused kernel memory: 38464K May 15 23:25:29.921518 kernel: Run /init as init process May 15 23:25:29.921525 kernel: with arguments: May 15 23:25:29.921531 kernel: /init May 15 23:25:29.921538 kernel: with environment: May 15 23:25:29.921545 kernel: HOME=/ May 15 23:25:29.921552 kernel: TERM=linux May 15 23:25:29.921558 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 23:25:29.921570 systemd[1]: Successfully made /usr/ read-only. 
May 15 23:25:29.921587 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:25:29.921595 systemd[1]: Detected virtualization kvm. May 15 23:25:29.921602 systemd[1]: Detected architecture arm64. May 15 23:25:29.921609 systemd[1]: Running in initrd. May 15 23:25:29.921616 systemd[1]: No hostname configured, using default hostname. May 15 23:25:29.921624 systemd[1]: Hostname set to . May 15 23:25:29.921631 systemd[1]: Initializing machine ID from VM UUID. May 15 23:25:29.921640 systemd[1]: Queued start job for default target initrd.target. May 15 23:25:29.921647 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:25:29.921655 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:25:29.921663 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 15 23:25:29.921671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:25:29.921678 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 15 23:25:29.921704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 15 23:25:29.921716 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 15 23:25:29.921724 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 15 23:25:29.921731 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:25:29.921739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:25:29.921746 systemd[1]: Reached target paths.target - Path Units. May 15 23:25:29.921754 systemd[1]: Reached target slices.target - Slice Units. May 15 23:25:29.921762 systemd[1]: Reached target swap.target - Swaps. May 15 23:25:29.921769 systemd[1]: Reached target timers.target - Timer Units. May 15 23:25:29.921778 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:25:29.921786 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:25:29.921794 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:25:29.921801 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 15 23:25:29.921809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:25:29.921816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:25:29.921823 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:25:29.921831 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:25:29.921838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 15 23:25:29.921847 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:25:29.921855 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 15 23:25:29.921863 systemd[1]: Starting systemd-fsck-usr.service... 
May 15 23:25:29.921870 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:25:29.921878 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:25:29.921885 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:25:29.921892 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:25:29.921900 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 15 23:25:29.921909 systemd[1]: Finished systemd-fsck-usr.service. May 15 23:25:29.921917 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:25:29.921944 systemd-journald[237]: Collecting audit messages is disabled. May 15 23:25:29.921966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:25:29.921974 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 23:25:29.921986 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:25:29.921993 kernel: Bridge firewalling registered May 15 23:25:29.922001 systemd-journald[237]: Journal started May 15 23:25:29.922021 systemd-journald[237]: Runtime Journal (/run/log/journal/c86b7c5c8aaf4c28a1a9425121a46356) is 5.9M, max 47.3M, 41.4M free. May 15 23:25:29.906008 systemd-modules-load[238]: Inserted module 'overlay' May 15 23:25:29.925189 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:25:29.922957 systemd-modules-load[238]: Inserted module 'br_netfilter' May 15 23:25:29.926349 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:25:29.928038 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:25:29.931548 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:25:29.933841 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:25:29.946481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:25:29.948258 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:25:29.954390 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:25:29.957012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:25:29.959776 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:25:29.964827 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 15 23:25:29.973513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:25:29.982556 dracut-cmdline[278]: dracut-dracut-053 May 15 23:25:29.985729 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5842e6d9a9272dc71039ff31db7df13c5a397d9a9917b662574c24d437910f6a May 15 23:25:30.013800 systemd-resolved[281]: Positive Trust Anchors: May 15 23:25:30.013815 systemd-resolved[281]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:25:30.013846 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:25:30.018678 systemd-resolved[281]: Defaulting to hostname 'linux'. May 15 23:25:30.019647 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:25:30.024005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:25:30.060729 kernel: SCSI subsystem initialized May 15 23:25:30.064707 kernel: Loading iSCSI transport class v2.0-870. May 15 23:25:30.072725 kernel: iscsi: registered transport (tcp) May 15 23:25:30.087720 kernel: iscsi: registered transport (qla4xxx) May 15 23:25:30.087753 kernel: QLogic iSCSI HBA Driver May 15 23:25:30.129205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 15 23:25:30.131514 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 15 23:25:30.163739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 23:25:30.163786 kernel: device-mapper: uevent: version 1.0.3 May 15 23:25:30.165715 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 15 23:25:30.211719 kernel: raid6: neonx8 gen() 15771 MB/s May 15 23:25:30.228712 kernel: raid6: neonx4 gen() 15776 MB/s May 15 23:25:30.245711 kernel: raid6: neonx2 gen() 13271 MB/s May 15 23:25:30.262713 kernel: raid6: neonx1 gen() 10504 MB/s May 15 23:25:30.279706 kernel: raid6: int64x8 gen() 6770 MB/s May 15 23:25:30.296711 kernel: raid6: int64x4 gen() 7338 MB/s May 15 23:25:30.313715 kernel: raid6: int64x2 gen() 6101 MB/s May 15 23:25:30.330934 kernel: raid6: int64x1 gen() 5041 MB/s May 15 23:25:30.330964 kernel: raid6: using algorithm neonx4 gen() 15776 MB/s May 15 23:25:30.348901 kernel: raid6: .... xor() 12389 MB/s, rmw enabled May 15 23:25:30.348928 kernel: raid6: using neon recovery algorithm May 15 23:25:30.353710 kernel: xor: measuring software checksum speed May 15 23:25:30.355084 kernel: 8regs : 18821 MB/sec May 15 23:25:30.355102 kernel: 32regs : 21590 MB/sec May 15 23:25:30.355765 kernel: arm64_neon : 26390 MB/sec May 15 23:25:30.355779 kernel: xor: using function: arm64_neon (26390 MB/sec) May 15 23:25:30.407720 kernel: Btrfs loaded, zoned=no, fsverity=no May 15 23:25:30.419485 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 15 23:25:30.422184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:25:30.444753 systemd-udevd[463]: Using default interface naming scheme 'v255'. May 15 23:25:30.448526 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:25:30.451652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 15 23:25:30.477244 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation May 15 23:25:30.505163 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:25:30.508855 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:25:30.562288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:25:30.564829 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:25:30.585924 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 23:25:30.587585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:25:30.589721 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:25:30.592010 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:25:30.594791 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:25:30.609721 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 23:25:30.615272 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:25:30.625736 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:25:30.626453 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:25:30.631349 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:25:30.631373 kernel: GPT:9289727 != 19775487 May 15 23:25:30.631391 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 23:25:30.631404 kernel: GPT:9289727 != 19775487 May 15 23:25:30.631412 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:25:30.631421 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:25:30.626576 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:25:30.633496 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:25:30.634702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:25:30.634857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:25:30.639135 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:25:30.643937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:25:30.649199 kernel: BTRFS: device fsid 44e3c267-913e-4e36-8a01-ed9d3f105561 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520) May 15 23:25:30.652703 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (524) May 15 23:25:30.664392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:25:30.671732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:25:30.679707 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 23:25:30.690619 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 23:25:30.691862 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 23:25:30.700780 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:25:30.702899 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 15 23:25:30.704909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:25:30.721893 disk-uuid[550]: Primary Header is updated. May 15 23:25:30.721893 disk-uuid[550]: Secondary Entries is updated. May 15 23:25:30.721893 disk-uuid[550]: Secondary Header is updated. May 15 23:25:30.730261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:25:30.732411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:25:31.739722 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:25:31.740375 disk-uuid[555]: The operation has completed successfully. May 15 23:25:31.761900 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:25:31.762010 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:25:31.790751 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:25:31.805587 sh[572]: Success May 15 23:25:31.823716 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 23:25:31.855402 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:25:31.858160 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:25:31.875303 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:25:31.889715 kernel: BTRFS info (device dm-0): first mount of filesystem 44e3c267-913e-4e36-8a01-ed9d3f105561 May 15 23:25:31.889752 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 23:25:31.889763 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:25:31.892193 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:25:31.892209 kernel: BTRFS info (device dm-0): using free space tree May 15 23:25:31.896166 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:25:31.897550 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:25:31.898349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:25:31.901143 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 15 23:25:31.925714 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:25:31.925749 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:25:31.925759 kernel: BTRFS info (device vda6): using free space tree May 15 23:25:31.928706 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:25:31.932745 kernel: BTRFS info (device vda6): last unmount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:25:31.935041 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:25:31.937564 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:25:31.995931 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:25:32.000842 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 15 23:25:32.041853 ignition[666]: Ignition 2.20.0 May 15 23:25:32.042019 systemd-networkd[754]: lo: Link UP May 15 23:25:32.041863 ignition[666]: Stage: fetch-offline May 15 23:25:32.042023 systemd-networkd[754]: lo: Gained carrier May 15 23:25:32.041890 ignition[666]: no configs at "/usr/lib/ignition/base.d" May 15 23:25:32.042844 systemd-networkd[754]: Enumeration completed May 15 23:25:32.041898 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:32.043240 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:25:32.042064 ignition[666]: parsed url from cmdline: "" May 15 23:25:32.043244 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:25:32.042067 ignition[666]: no config URL provided May 15 23:25:32.043253 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:25:32.042072 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:25:32.044044 systemd-networkd[754]: eth0: Link UP May 15 23:25:32.042080 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 15 23:25:32.044047 systemd-networkd[754]: eth0: Gained carrier May 15 23:25:32.042113 ignition[666]: op(1): [started] loading QEMU firmware config module May 15 23:25:32.044054 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:25:32.042121 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:25:32.045320 systemd[1]: Reached target network.target - Network. May 15 23:25:32.056845 ignition[666]: op(1): [finished] loading QEMU firmware config module May 15 23:25:32.070727 systemd-networkd[754]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:25:32.100113 ignition[666]: parsing config with SHA512: 5e02ac460705c609b616907f3d1991b9a6017fea13ffe9688138271d8512e73a540d81576b3cdf0df46a04e62f34f223fc56d6af074e62337709598b1fa734f8 May 15 23:25:32.107978 unknown[666]: fetched base config from "system" May 15 23:25:32.107987 unknown[666]: fetched user config from "qemu" May 15 23:25:32.108459 ignition[666]: fetch-offline: fetch-offline passed May 15 23:25:32.108532 ignition[666]: Ignition finished successfully May 15 23:25:32.111968 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:25:32.113589 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:25:32.114550 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 23:25:32.140813 ignition[768]: Ignition 2.20.0 May 15 23:25:32.140822 ignition[768]: Stage: kargs May 15 23:25:32.140971 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 15 23:25:32.140980 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:32.141887 ignition[768]: kargs: kargs passed May 15 23:25:32.144627 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:25:32.141930 ignition[768]: Ignition finished successfully May 15 23:25:32.146641 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 15 23:25:32.165748 ignition[777]: Ignition 2.20.0 May 15 23:25:32.165758 ignition[777]: Stage: disks May 15 23:25:32.165918 ignition[777]: no configs at "/usr/lib/ignition/base.d" May 15 23:25:32.167977 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:25:32.165928 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:32.169361 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:25:32.166799 ignition[777]: disks: disks passed May 15 23:25:32.170765 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:25:32.166842 ignition[777]: Ignition finished successfully May 15 23:25:32.172708 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:25:32.174594 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:25:32.176441 systemd[1]: Reached target basic.target - Basic System. May 15 23:25:32.178573 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 23:25:32.202332 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:25:32.206151 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:25:32.208372 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 23:25:32.268706 kernel: EXT4-fs (vda9): mounted filesystem 4099475e-0c33-48d1-8a7f-66c442027985 r/w with ordered data mode. Quota mode: none. May 15 23:25:32.269251 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:25:32.270597 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:25:32.272891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:25:32.274539 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:25:32.275584 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 23:25:32.275640 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:25:32.275665 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:25:32.285535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:25:32.288725 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 23:25:32.290925 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796) May 15 23:25:32.293968 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:25:32.293996 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:25:32.294007 kernel: BTRFS info (device vda6): using free space tree May 15 23:25:32.296703 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:25:32.298008 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:25:32.335061 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:25:32.338897 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory May 15 23:25:32.342679 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:25:32.346483 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:25:32.419036 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 15 23:25:32.421797 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:25:32.423349 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:25:32.436705 kernel: BTRFS info (device vda6): last unmount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:25:32.451883 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 23:25:32.462010 ignition[911]: INFO : Ignition 2.20.0 May 15 23:25:32.462010 ignition[911]: INFO : Stage: mount May 15 23:25:32.464476 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:25:32.464476 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:32.464476 ignition[911]: INFO : mount: mount passed May 15 23:25:32.464476 ignition[911]: INFO : Ignition finished successfully May 15 23:25:32.465322 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:25:32.468806 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:25:32.896936 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:25:32.898377 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:25:32.922245 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924) May 15 23:25:32.925287 kernel: BTRFS info (device vda6): first mount of filesystem 17843e2b-3b85-462c-ad3f-d3e62fd4c5a5 May 15 23:25:32.925329 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:25:32.925344 kernel: BTRFS info (device vda6): using free space tree May 15 23:25:32.928770 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:25:32.929351 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:25:32.955357 ignition[941]: INFO : Ignition 2.20.0 May 15 23:25:32.955357 ignition[941]: INFO : Stage: files May 15 23:25:32.956925 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:25:32.956925 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:32.956925 ignition[941]: DEBUG : files: compiled without relabeling support, skipping May 15 23:25:32.962589 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 23:25:32.962589 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 23:25:32.962589 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 23:25:32.962589 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 23:25:32.962589 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 23:25:32.962540 unknown[941]: wrote ssh authorized keys file for user: core May 15 23:25:32.970530 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 23:25:32.970530 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 23:25:33.140140 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 23:25:33.387815 systemd-networkd[754]: eth0: Gained IPv6LL May 15 23:25:33.619483 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 23:25:33.621489 ignition[941]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:25:33.621489 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 23:25:33.885804 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 23:25:33.958159 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:25:33.960083 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 15 23:25:34.615337 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 23:25:34.919123 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:25:34.919123 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 23:25:34.922965 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 23:25:34.939239 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:25:34.942094 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:25:34.944682 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 23:25:34.944682 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 23:25:34.944682 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 23:25:34.944682 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 23:25:34.944682 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 23:25:34.944682 ignition[941]: INFO : files: files passed May 15 23:25:34.944682 ignition[941]: INFO : Ignition finished successfully May 15 23:25:34.947376 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 23:25:34.950262 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 23:25:34.952299 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 23:25:34.967564 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 23:25:34.967648 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 23:25:34.971742 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 15 23:25:34.973097 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:25:34.973097 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 23:25:34.977131 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:25:34.973178 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:25:34.976149 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 23:25:34.978798 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 23:25:35.025472 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 23:25:35.025574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
May 15 23:25:35.027842 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 23:25:35.028851 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 23:25:35.030838 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 23:25:35.031537 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 23:25:35.053594 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:25:35.055911 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 23:25:35.074805 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 23:25:35.076000 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:25:35.077990 systemd[1]: Stopped target timers.target - Timer Units. May 15 23:25:35.079714 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 23:25:35.079827 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:25:35.082268 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 23:25:35.083322 systemd[1]: Stopped target basic.target - Basic System. May 15 23:25:35.085112 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 23:25:35.086886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:25:35.088637 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 23:25:35.090582 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 23:25:35.092587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:25:35.094632 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 23:25:35.096414 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 23:25:35.098343 systemd[1]: Stopped target swap.target - Swaps. May 15 23:25:35.099864 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 23:25:35.099982 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 23:25:35.102297 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 23:25:35.104185 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:25:35.106139 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 23:25:35.107764 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:25:35.109241 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 23:25:35.109357 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 23:25:35.112108 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 23:25:35.112233 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:25:35.114169 systemd[1]: Stopped target paths.target - Path Units. May 15 23:25:35.115744 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 23:25:35.116749 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:25:35.118864 systemd[1]: Stopped target slices.target - Slice Units. May 15 23:25:35.120367 systemd[1]: Stopped target sockets.target - Socket Units. May 15 23:25:35.122058 systemd[1]: iscsid.socket: Deactivated successfully. 
May 15 23:25:35.122144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:25:35.124176 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 23:25:35.124267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:25:35.125770 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 23:25:35.125880 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:25:35.127635 systemd[1]: ignition-files.service: Deactivated successfully. May 15 23:25:35.127751 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 23:25:35.130008 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 23:25:35.131536 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 23:25:35.131667 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:25:35.145157 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 23:25:35.146005 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 23:25:35.146128 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:25:35.148059 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 23:25:35.148158 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:25:35.154608 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 23:25:35.154710 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 23:25:35.158126 ignition[998]: INFO : Ignition 2.20.0 May 15 23:25:35.158126 ignition[998]: INFO : Stage: umount May 15 23:25:35.158126 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:25:35.158126 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:25:35.162104 ignition[998]: INFO : umount: umount passed May 15 23:25:35.162104 ignition[998]: INFO : Ignition finished successfully May 15 23:25:35.159526 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 23:25:35.160929 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 23:25:35.161046 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 23:25:35.163986 systemd[1]: Stopped target network.target - Network. May 15 23:25:35.164905 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 23:25:35.164968 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 23:25:35.166500 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 23:25:35.166548 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 23:25:35.168207 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 23:25:35.168264 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 23:25:35.169889 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 23:25:35.169931 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 23:25:35.171855 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 23:25:35.173659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 23:25:35.181235 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 23:25:35.181349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 23:25:35.186416 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
May 15 23:25:35.186644 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 23:25:35.186754 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 23:25:35.192374 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 15 23:25:35.192981 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 23:25:35.193037 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 23:25:35.195571 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 23:25:35.196721 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 23:25:35.196779 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:25:35.198711 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:25:35.198757 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:25:35.201574 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 23:25:35.201617 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 23:25:35.203572 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 23:25:35.203616 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:25:35.206600 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:25:35.210732 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 23:25:35.210792 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 15 23:25:35.224880 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 23:25:35.225052 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:25:35.227587 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 23:25:35.227675 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 23:25:35.229811 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 23:25:35.230805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 23:25:35.232197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 23:25:35.232275 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:25:35.233920 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 23:25:35.233970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 23:25:35.236551 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 23:25:35.236597 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 23:25:35.239120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:25:35.239164 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:25:35.241873 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 23:25:35.243023 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 23:25:35.243075 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:25:35.245777 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:25:35.245817 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 23:25:35.249710 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 23:25:35.249764 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 15 23:25:35.254868 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 23:25:35.254960 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 23:25:35.257000 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 23:25:35.257079 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 23:25:35.258960 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 23:25:35.259042 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 23:25:35.260585 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 23:25:35.262561 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 23:25:35.270976 systemd[1]: Switching root. May 15 23:25:35.300554 systemd-journald[237]: Journal stopped May 15 23:25:36.037395 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). May 15 23:25:36.037456 kernel: SELinux: policy capability network_peer_controls=1 May 15 23:25:36.037469 kernel: SELinux: policy capability open_perms=1 May 15 23:25:36.037485 kernel: SELinux: policy capability extended_socket_class=1 May 15 23:25:36.037494 kernel: SELinux: policy capability always_check_network=0 May 15 23:25:36.037504 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 23:25:36.037514 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 23:25:36.037525 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 23:25:36.037537 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 23:25:36.037546 kernel: audit: type=1403 audit(1747351535.458:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 23:25:36.037556 systemd[1]: Successfully loaded SELinux policy in 31.139ms. May 15 23:25:36.037580 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.345ms. May 15 23:25:36.037593 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 15 23:25:36.037604 systemd[1]: Detected virtualization kvm. May 15 23:25:36.037614 systemd[1]: Detected architecture arm64. May 15 23:25:36.037624 systemd[1]: Detected first boot. May 15 23:25:36.037634 systemd[1]: Initializing machine ID from VM UUID. May 15 23:25:36.037644 zram_generator::config[1046]: No configuration found. May 15 23:25:36.037657 kernel: NET: Registered PF_VSOCK protocol family May 15 23:25:36.037666 systemd[1]: Populated /etc with preset unit settings. May 15 23:25:36.037679 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 15 23:25:36.037769 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 23:25:36.037782 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 23:25:36.037792 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 23:25:36.037804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
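After the switch into the real root, journald restarts and the SELinux policy is loaded on first boot, as logged above. A couple of commands for inspecting this phase after the fact, assuming the usual SELinux userspace tools are present on the image:

    # Show this boot's log with the same precise timestamps used above,
    # including the initrd-phase messages journald carried across switch-root
    journalctl -b -o short-precise

    # Confirm the SELinux state corresponding to the policy-load message
    getenforce
    sestatus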
May 15 23:25:36.037815 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 23:25:36.037825 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 23:25:36.037834 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 23:25:36.037844 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 23:25:36.037857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 23:25:36.037868 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 23:25:36.037878 systemd[1]: Created slice user.slice - User and Session Slice. May 15 23:25:36.037888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:25:36.037899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:25:36.037909 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 23:25:36.037919 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 23:25:36.037929 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 23:25:36.037939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:25:36.037951 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 23:25:36.037961 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:25:36.037972 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 23:25:36.037984 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 23:25:36.037994 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 23:25:36.038004 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 23:25:36.038014 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:25:36.038026 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:25:36.038036 systemd[1]: Reached target slices.target - Slice Units. May 15 23:25:36.038047 systemd[1]: Reached target swap.target - Swaps. May 15 23:25:36.038057 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 23:25:36.038067 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 23:25:36.038078 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 15 23:25:36.038088 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:25:36.038098 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:25:36.038109 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:25:36.038119 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 23:25:36.038131 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 23:25:36.038141 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 23:25:36.038151 systemd[1]: Mounting media.mount - External Media Directory... May 15 23:25:36.038161 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
May 15 23:25:36.038171 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 23:25:36.038181 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 23:25:36.038193 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 23:25:36.038203 systemd[1]: Reached target machines.target - Containers. May 15 23:25:36.038215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 23:25:36.038232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:25:36.038243 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 23:25:36.038254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 23:25:36.038264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:25:36.038274 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:25:36.038284 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:25:36.038295 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 23:25:36.038305 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:25:36.038317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 23:25:36.038327 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 23:25:36.038337 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 23:25:36.038347 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 23:25:36.038357 kernel: fuse: init (API version 7.39) May 15 23:25:36.038366 systemd[1]: Stopped systemd-fsck-usr.service. May 15 23:25:36.038378 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:25:36.038388 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:25:36.038400 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:25:36.038410 kernel: loop: module loaded May 15 23:25:36.038420 kernel: ACPI: bus type drm_connector registered May 15 23:25:36.038429 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 23:25:36.038440 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 23:25:36.038470 systemd-journald[1121]: Collecting audit messages is disabled. May 15 23:25:36.038493 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 15 23:25:36.038503 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:25:36.038515 systemd-journald[1121]: Journal started May 15 23:25:36.038536 systemd-journald[1121]: Runtime Journal (/run/log/journal/c86b7c5c8aaf4c28a1a9425121a46356) is 5.9M, max 47.3M, 41.4M free. May 15 23:25:35.839513 systemd[1]: Queued start job for default target multi-user.target. May 15 23:25:35.850575 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
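Several units above are reported as "skipped because of an unmet condition check" (ConditionPathExists, and so on). That is ordinary systemd behaviour rather than an error, and the conditions of any such unit can be inspected directly; a small sketch using one of the units named above:

    # See the declared conditions of a unit that was skipped above
    systemctl cat systemd-hibernate-clear.service

    # Query whether its conditions held on the last start attempt
    systemctl show systemd-hibernate-clear.service -p ConditionResult -p ConditionTimestamp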
May 15 23:25:35.850949 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 23:25:36.040713 systemd[1]: verity-setup.service: Deactivated successfully. May 15 23:25:36.040752 systemd[1]: Stopped verity-setup.service. May 15 23:25:36.046412 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:25:36.047063 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 23:25:36.048290 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 23:25:36.049564 systemd[1]: Mounted media.mount - External Media Directory. May 15 23:25:36.050727 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 23:25:36.051896 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 23:25:36.053094 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 23:25:36.055706 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 23:25:36.057130 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:25:36.060061 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 23:25:36.060254 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 23:25:36.061682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:25:36.061887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:25:36.065041 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:25:36.065235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:25:36.066545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:25:36.066713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:25:36.068170 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 23:25:36.068359 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 15 23:25:36.069732 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:25:36.069908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:25:36.071538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:25:36.074053 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 23:25:36.075620 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 23:25:36.077144 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 15 23:25:36.090342 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 23:25:36.092817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 23:25:36.094788 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 23:25:36.096057 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 23:25:36.096096 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:25:36.098052 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 15 23:25:36.104578 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 23:25:36.106680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
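The modprobe@*.service entries above are instances of systemd's modprobe@.service template, which loads one kernel module per instance (configfs, dm_mod, drm, efi_pstore, fuse, loop in this boot). Roughly equivalent operations by hand, for reference:

    # Inspect the template behind modprobe@fuse.service, modprobe@loop.service, etc.
    systemctl cat modprobe@.service

    # Start one instance (loads the named module), or load the module directly
    systemctl start modprobe@fuse.service
    modprobe fuse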
May 15 23:25:36.107855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:25:36.109072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 23:25:36.111366 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 23:25:36.112647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:25:36.114853 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 23:25:36.117233 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:25:36.121440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:25:36.124598 systemd-journald[1121]: Time spent on flushing to /var/log/journal/c86b7c5c8aaf4c28a1a9425121a46356 is 30.095ms for 871 entries. May 15 23:25:36.124598 systemd-journald[1121]: System Journal (/var/log/journal/c86b7c5c8aaf4c28a1a9425121a46356) is 8M, max 195.6M, 187.6M free. May 15 23:25:36.168834 systemd-journald[1121]: Received client request to flush runtime journal. May 15 23:25:36.168884 kernel: loop0: detected capacity change from 0 to 203944 May 15 23:25:36.168912 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 23:25:36.127418 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 23:25:36.132567 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 23:25:36.139746 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:25:36.142235 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 23:25:36.143812 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 23:25:36.145432 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 23:25:36.147142 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 23:25:36.148779 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:25:36.158993 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 23:25:36.163884 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 15 23:25:36.170294 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 23:25:36.173142 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 23:25:36.185384 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 23:25:36.186080 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 23:25:36.187533 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 15 23:25:36.191992 kernel: loop1: detected capacity change from 0 to 103832 May 15 23:25:36.193937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 23:25:36.195755 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 23:25:36.220336 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. May 15 23:25:36.220354 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. 
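The journald messages above show the runtime journal in /run/log/journal being flushed into the persistent system journal under /var/log/journal, with the size limits reported for each. The same request, and a usage check against those limits, can be issued manually:

    # What systemd-journal-flush.service asks journald to do
    journalctl --flush

    # Compare current usage against the runtime/system limits logged above
    journalctl --disk-usage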
May 15 23:25:36.223749 kernel: loop2: detected capacity change from 0 to 126448 May 15 23:25:36.224929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:25:36.262712 kernel: loop3: detected capacity change from 0 to 203944 May 15 23:25:36.268726 kernel: loop4: detected capacity change from 0 to 103832 May 15 23:25:36.273717 kernel: loop5: detected capacity change from 0 to 126448 May 15 23:25:36.277282 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 23:25:36.277651 (sd-merge)[1190]: Merged extensions into '/usr'. May 15 23:25:36.286899 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... May 15 23:25:36.286920 systemd[1]: Reloading... May 15 23:25:36.345708 zram_generator::config[1221]: No configuration found. May 15 23:25:36.365358 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 23:25:36.439683 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:25:36.490149 systemd[1]: Reloading finished in 202 ms. May 15 23:25:36.515263 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 23:25:36.518730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 23:25:36.529906 systemd[1]: Starting ensure-sysext.service... May 15 23:25:36.531699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:25:36.556995 systemd[1]: Reload requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... May 15 23:25:36.557010 systemd[1]: Reloading... May 15 23:25:36.557304 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 23:25:36.557517 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 23:25:36.558208 systemd-tmpfiles[1253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 23:25:36.558443 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 15 23:25:36.558499 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. May 15 23:25:36.560856 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:25:36.560867 systemd-tmpfiles[1253]: Skipping /boot May 15 23:25:36.569841 systemd-tmpfiles[1253]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:25:36.569851 systemd-tmpfiles[1253]: Skipping /boot May 15 23:25:36.598721 zram_generator::config[1285]: No configuration found. May 15 23:25:36.681432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:25:36.731107 systemd[1]: Reloading finished in 173 ms. May 15 23:25:36.741169 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 23:25:36.752669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:25:36.760290 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
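The sd-merge lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr, followed by a daemon reload. The merged state can be inspected or re-evaluated with the systemd-sysext tool:

    # List the extension images picked up from /etc/extensions and /var/lib/extensions
    systemd-sysext list
    systemd-sysext status

    # Re-merge after adding or removing an image, such as the
    # kubernetes-v1.31.8 raw image linked earlier in this log
    systemd-sysext refresh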
May 15 23:25:36.762681 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 23:25:36.777474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 23:25:36.782919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:25:36.785858 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:25:36.789461 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 23:25:36.792917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:25:36.798441 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:25:36.805762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:25:36.810911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:25:36.812142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:25:36.812268 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:25:36.813245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:25:36.813435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:25:36.815196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:25:36.815349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:25:36.827551 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 23:25:36.829302 systemd-udevd[1327]: Using default interface naming scheme 'v255'. May 15 23:25:36.829612 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:25:36.829788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:25:36.836839 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 23:25:36.841366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:25:36.844919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:25:36.859529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:25:36.863898 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:25:36.865935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:25:36.867047 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:25:36.867168 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 15 23:25:36.868457 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 23:25:36.872054 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 15 23:25:36.876453 augenrules[1374]: No rules May 15 23:25:36.882023 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:25:36.883882 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:25:36.885796 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:25:36.887869 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 23:25:36.889767 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:25:36.889922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:25:36.891507 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:25:36.891648 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:25:36.893294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:25:36.893483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:25:36.895400 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:25:36.895539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:25:36.898797 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 23:25:36.904717 systemd[1]: Finished ensure-sysext.service. May 15 23:25:36.916665 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 23:25:36.921860 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:25:36.923005 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:25:36.923078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:25:36.927408 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 23:25:36.928777 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:25:36.928976 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 23:25:36.946755 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1361) May 15 23:25:36.985572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:25:36.989895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 23:25:37.029060 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 23:25:37.040933 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 23:25:37.042275 systemd[1]: Reached target time-set.target - System Time Set. May 15 23:25:37.054853 systemd-resolved[1322]: Positive Trust Anchors: May 15 23:25:37.054869 systemd-resolved[1322]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:25:37.054901 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:25:37.066975 systemd-networkd[1389]: lo: Link UP May 15 23:25:37.066982 systemd-networkd[1389]: lo: Gained carrier May 15 23:25:37.067934 systemd-networkd[1389]: Enumeration completed May 15 23:25:37.068034 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:25:37.068561 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:25:37.068571 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:25:37.068997 systemd-networkd[1389]: eth0: Link UP May 15 23:25:37.069006 systemd-networkd[1389]: eth0: Gained carrier May 15 23:25:37.069018 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:25:37.070464 systemd-resolved[1322]: Defaulting to hostname 'linux'. May 15 23:25:37.070555 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 15 23:25:37.077833 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 23:25:37.079406 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:25:37.080973 systemd[1]: Reached target network.target - Network. May 15 23:25:37.081964 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:25:37.087776 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.41/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:25:37.091110 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. May 15 23:25:37.093111 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:25:37.096116 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 23:25:37.096219 systemd-timesyncd[1394]: Initial clock synchronization to Thu 2025-05-15 23:25:36.907690 UTC. May 15 23:25:37.102720 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 15 23:25:37.104361 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 23:25:37.108504 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 23:25:37.141706 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:25:37.150307 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:25:37.177140 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 23:25:37.178573 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
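The systemd-networkd lines above show eth0 being matched by the stock /usr/lib/systemd/network/zz-default.network and obtaining 10.0.0.41/16 over DHCPv4, after which systemd-resolved and systemd-timesyncd pick up the DNS and NTP settings. A minimal .network file with the same effect would look roughly like this (the interface-name match is an assumption for illustration; the shipped zz-default.network matches more broadly):

    # Hypothetical admin override, e.g. /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes

Files in /etc/systemd/network take precedence over the vendor defaults in /usr/lib/systemd/network.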
May 15 23:25:37.180892 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:25:37.182021 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 23:25:37.183268 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 23:25:37.184646 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 23:25:37.185816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 23:25:37.187020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 23:25:37.188258 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 23:25:37.188290 systemd[1]: Reached target paths.target - Path Units. May 15 23:25:37.189167 systemd[1]: Reached target timers.target - Timer Units. May 15 23:25:37.191068 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 23:25:37.193334 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 23:25:37.196576 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 15 23:25:37.198082 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 15 23:25:37.199333 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 15 23:25:37.204726 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 23:25:37.206184 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 15 23:25:37.208475 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 23:25:37.210151 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 23:25:37.211306 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:25:37.212250 systemd[1]: Reached target basic.target - Basic System. May 15 23:25:37.213216 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 23:25:37.213255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 23:25:37.214179 systemd[1]: Starting containerd.service - containerd container runtime... May 15 23:25:37.216036 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:25:37.216821 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 23:25:37.233156 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 23:25:37.235210 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 23:25:37.236247 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 23:25:37.239167 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 23:25:37.242410 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 23:25:37.242739 jq[1428]: false May 15 23:25:37.244474 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 23:25:37.247031 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
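The "Listening on …" and timer entries above are socket- and timer-activated units (docker.socket, the sshd sockets, logrotate.timer, mdadm.timer, and so on). Their state after boot can be listed with:

    # Socket units and the services they will activate on first connection
    systemctl list-sockets

    # Timer units such as logrotate.timer and systemd-tmpfiles-clean.timer
    systemctl list-timers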
May 15 23:25:37.255254 extend-filesystems[1429]: Found loop3 May 15 23:25:37.257388 extend-filesystems[1429]: Found loop4 May 15 23:25:37.257388 extend-filesystems[1429]: Found loop5 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda May 15 23:25:37.257388 extend-filesystems[1429]: Found vda1 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda2 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda3 May 15 23:25:37.257388 extend-filesystems[1429]: Found usr May 15 23:25:37.257388 extend-filesystems[1429]: Found vda4 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda6 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda7 May 15 23:25:37.257388 extend-filesystems[1429]: Found vda9 May 15 23:25:37.257388 extend-filesystems[1429]: Checking size of /dev/vda9 May 15 23:25:37.255465 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 23:25:37.264942 dbus-daemon[1427]: [system] SELinux support is enabled May 15 23:25:37.294021 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1373) May 15 23:25:37.294047 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 23:25:37.294064 extend-filesystems[1429]: Resized partition /dev/vda9 May 15 23:25:37.258902 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 23:25:37.300046 extend-filesystems[1449]: resize2fs 1.47.2 (1-Jan-2025) May 15 23:25:37.259358 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 23:25:37.304955 jq[1447]: true May 15 23:25:37.259924 systemd[1]: Starting update-engine.service - Update Engine... May 15 23:25:37.265780 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:25:37.267603 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:25:37.272733 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:25:37.276624 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:25:37.276813 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:25:37.277055 systemd[1]: motdgen.service: Deactivated successfully. May 15 23:25:37.277203 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 23:25:37.292029 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 23:25:37.292223 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:25:37.302569 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:25:37.315438 jq[1454]: true May 15 23:25:37.325705 tar[1452]: linux-arm64/helm May 15 23:25:37.329448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:25:37.329481 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:25:37.331080 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
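The extend-filesystems entries here scan the block devices and then grow the root filesystem on /dev/vda9 online, which is the EXT4 resize from 553472 to 1864699 blocks that the kernel reports; the "resized filesystem" and extend-filesystems completion messages just below confirm the result. As a sketch only (the service's actual script is not reproduced in this log), the online grow amounts to:

    # Grow the mounted ext4 filesystem to fill the already-enlarged partition
    resize2fs /dev/vda9

    # Verify the new size
    df -h /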
May 15 23:25:37.331109 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:25:37.334795 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (Power Button) May 15 23:25:37.336018 systemd-logind[1440]: New seat seat0. May 15 23:25:37.336906 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:25:37.349724 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 23:25:37.352844 update_engine[1444]: I20250515 23:25:37.352684 1444 main.cc:92] Flatcar Update Engine starting May 15 23:25:37.370986 update_engine[1444]: I20250515 23:25:37.355854 1444 update_check_scheduler.cc:74] Next update check in 7m53s May 15 23:25:37.354607 systemd[1]: Started update-engine.service - Update Engine. May 15 23:25:37.359190 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 23:25:37.371591 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 23:25:37.371591 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 23:25:37.371591 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 23:25:37.385115 extend-filesystems[1429]: Resized filesystem in /dev/vda9 May 15 23:25:37.375983 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:25:37.376203 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:25:37.407620 bash[1481]: Updated "/home/core/.ssh/authorized_keys" May 15 23:25:37.410157 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:25:37.418204 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 23:25:37.433931 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:25:37.528498 containerd[1456]: time="2025-05-15T23:25:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 15 23:25:37.530445 containerd[1456]: time="2025-05-15T23:25:37.530407720Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 15 23:25:37.539941 containerd[1456]: time="2025-05-15T23:25:37.539893960Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.96µs" May 15 23:25:37.539941 containerd[1456]: time="2025-05-15T23:25:37.539938920Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 15 23:25:37.540013 containerd[1456]: time="2025-05-15T23:25:37.539963640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540121880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540147680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540178160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540242000Z" level=info msg="skip loading plugin" error="no 
scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540261920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540582160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540599680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540616240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 15 23:25:37.540718 containerd[1456]: time="2025-05-15T23:25:37.540630160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 15 23:25:37.540879 containerd[1456]: time="2025-05-15T23:25:37.540747560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 15 23:25:37.541096 containerd[1456]: time="2025-05-15T23:25:37.541073280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:25:37.541125 containerd[1456]: time="2025-05-15T23:25:37.541113800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 15 23:25:37.541145 containerd[1456]: time="2025-05-15T23:25:37.541125960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 15 23:25:37.541162 containerd[1456]: time="2025-05-15T23:25:37.541156000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 15 23:25:37.541399 containerd[1456]: time="2025-05-15T23:25:37.541379480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 15 23:25:37.541466 containerd[1456]: time="2025-05-15T23:25:37.541448480Z" level=info msg="metadata content store policy set" policy=shared May 15 23:25:37.569439 containerd[1456]: time="2025-05-15T23:25:37.569404640Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 15 23:25:37.569485 containerd[1456]: time="2025-05-15T23:25:37.569461240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 15 23:25:37.569522 containerd[1456]: time="2025-05-15T23:25:37.569485800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 15 23:25:37.569522 containerd[1456]: time="2025-05-15T23:25:37.569499040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 15 23:25:37.569522 containerd[1456]: time="2025-05-15T23:25:37.569510720Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 15 23:25:37.569570 containerd[1456]: 
time="2025-05-15T23:25:37.569524080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 15 23:25:37.569570 containerd[1456]: time="2025-05-15T23:25:37.569536920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 15 23:25:37.569570 containerd[1456]: time="2025-05-15T23:25:37.569549680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 15 23:25:37.569570 containerd[1456]: time="2025-05-15T23:25:37.569559920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 15 23:25:37.569635 containerd[1456]: time="2025-05-15T23:25:37.569571080Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 15 23:25:37.569635 containerd[1456]: time="2025-05-15T23:25:37.569580840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 15 23:25:37.569635 containerd[1456]: time="2025-05-15T23:25:37.569592560Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569746840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569774680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569788040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569801200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569812120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569823120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569834640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569845000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569855920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569867160Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 15 23:25:37.570347 containerd[1456]: time="2025-05-15T23:25:37.569877320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 15 23:25:37.571946 containerd[1456]: time="2025-05-15T23:25:37.571925320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 15 23:25:37.571997 containerd[1456]: time="2025-05-15T23:25:37.571950000Z" level=info msg="Start snapshots syncer" May 15 23:25:37.571997 containerd[1456]: 
time="2025-05-15T23:25:37.571989080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 15 23:25:37.572290 containerd[1456]: time="2025-05-15T23:25:37.572245160Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572302000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572372040Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572482720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572505280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572523880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 15 23:25:37.572535 containerd[1456]: time="2025-05-15T23:25:37.572535440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572548080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572558680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572569080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local 
type=io.containerd.transfer.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572596880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572614240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572624720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 15 23:25:37.572665 containerd[1456]: time="2025-05-15T23:25:37.572658560Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572671640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572680720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572712720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572721720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572733440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 15 23:25:37.572801 containerd[1456]: time="2025-05-15T23:25:37.572744440Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 15 23:25:37.572898 containerd[1456]: time="2025-05-15T23:25:37.572822120Z" level=info msg="runtime interface created" May 15 23:25:37.572898 containerd[1456]: time="2025-05-15T23:25:37.572827720Z" level=info msg="created NRI interface" May 15 23:25:37.572898 containerd[1456]: time="2025-05-15T23:25:37.572836160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 15 23:25:37.572898 containerd[1456]: time="2025-05-15T23:25:37.572847800Z" level=info msg="Connect containerd service" May 15 23:25:37.572898 containerd[1456]: time="2025-05-15T23:25:37.572874760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:25:37.573535 containerd[1456]: time="2025-05-15T23:25:37.573506560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:25:37.684026 containerd[1456]: time="2025-05-15T23:25:37.683965840Z" level=info msg="Start subscribing containerd event" May 15 23:25:37.684254 containerd[1456]: time="2025-05-15T23:25:37.684137240Z" level=info msg="Start recovering state" May 15 23:25:37.684327 containerd[1456]: time="2025-05-15T23:25:37.684290480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:25:37.684356 containerd[1456]: time="2025-05-15T23:25:37.684348680Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 23:25:37.686417 tar[1452]: linux-arm64/LICENSE May 15 23:25:37.686506 tar[1452]: linux-arm64/README.md May 15 23:25:37.686553 containerd[1456]: time="2025-05-15T23:25:37.686493960Z" level=info msg="Start event monitor" May 15 23:25:37.686633 containerd[1456]: time="2025-05-15T23:25:37.686620080Z" level=info msg="Start cni network conf syncer for default" May 15 23:25:37.686683 containerd[1456]: time="2025-05-15T23:25:37.686670400Z" level=info msg="Start streaming server" May 15 23:25:37.686758 containerd[1456]: time="2025-05-15T23:25:37.686744840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 15 23:25:37.686805 containerd[1456]: time="2025-05-15T23:25:37.686793280Z" level=info msg="runtime interface starting up..." May 15 23:25:37.686883 containerd[1456]: time="2025-05-15T23:25:37.686866680Z" level=info msg="starting plugins..." May 15 23:25:37.686952 containerd[1456]: time="2025-05-15T23:25:37.686938840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 15 23:25:37.687137 containerd[1456]: time="2025-05-15T23:25:37.687115160Z" level=info msg="containerd successfully booted in 0.159023s" May 15 23:25:37.687356 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:25:37.712185 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:25:37.818916 sshd_keygen[1450]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:25:37.837805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:25:37.840547 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:25:37.867946 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:25:37.868774 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 23:25:37.871719 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:25:37.893789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:25:37.896551 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:25:37.898648 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 23:25:37.900008 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:25:38.187813 systemd-networkd[1389]: eth0: Gained IPv6LL May 15 23:25:38.191731 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:25:38.193553 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:25:38.196700 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 23:25:38.199330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:25:38.211503 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:25:38.227439 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 23:25:38.227668 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 23:25:38.229163 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 23:25:38.232330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:25:38.753798 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:25:38.755341 systemd[1]: Reached target multi-user.target - Multi-User System. 
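The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: the CRI plugin starts before any pod network exists, and its conf syncer keeps watching the directory. A minimal sketch of a config that would satisfy it, assuming the stock bridge/host-local/portmap plugins live under /opt/cni/bin (the binDir reported in the CRI config above) and using a placeholder subnet; on a real cluster the network add-on normally writes this file itself:

mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-bridge.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
# the "Start cni network conf syncer for default" loop above picks the file up on its own
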
May 15 23:25:38.757165 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:25:38.758548 systemd[1]: Startup finished in 593ms (kernel) + 5.763s (initrd) + 3.332s (userspace) = 9.689s. May 15 23:25:39.177550 kubelet[1552]: E0515 23:25:39.177485 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:25:39.179895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:25:39.180051 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:25:39.180345 systemd[1]: kubelet.service: Consumed 837ms CPU time, 259.5M memory peak. May 15 23:25:42.467077 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 23:25:42.468173 systemd[1]: Started sshd@0-10.0.0.41:22-10.0.0.1:43382.service - OpenSSH per-connection server daemon (10.0.0.1:43382). May 15 23:25:42.545891 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 43382 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:42.547408 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:42.558649 systemd-logind[1440]: New session 1 of user core. May 15 23:25:42.559592 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 23:25:42.560552 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 23:25:42.584006 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 23:25:42.587204 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 23:25:42.622076 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 23:25:42.624603 systemd-logind[1440]: New session c1 of user core. May 15 23:25:42.733378 systemd[1569]: Queued start job for default target default.target. May 15 23:25:42.743653 systemd[1569]: Created slice app.slice - User Application Slice. May 15 23:25:42.743704 systemd[1569]: Reached target paths.target - Paths. May 15 23:25:42.743744 systemd[1569]: Reached target timers.target - Timers. May 15 23:25:42.744982 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 23:25:42.753977 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 23:25:42.754034 systemd[1569]: Reached target sockets.target - Sockets. May 15 23:25:42.754068 systemd[1569]: Reached target basic.target - Basic System. May 15 23:25:42.754095 systemd[1569]: Reached target default.target - Main User Target. May 15 23:25:42.754120 systemd[1569]: Startup finished in 124ms. May 15 23:25:42.754280 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 23:25:42.755810 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 23:25:42.822565 systemd[1]: Started sshd@1-10.0.0.41:22-10.0.0.1:45070.service - OpenSSH per-connection server daemon (10.0.0.1:45070). 
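The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init/join, so the unit keeps failing and being restarted until then. A minimal hand-written sketch of such a file, with the cluster DNS address as a placeholder (the cgroup driver, client CA path and static pod path match what the kubelet reports further down once it does start):

cat > /var/lib/kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10          # placeholder; use the cluster's real DNS service IP
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
EOF
systemctl restart kubelet
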
May 15 23:25:42.877144 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 45070 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:42.878737 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:42.882480 systemd-logind[1440]: New session 2 of user core. May 15 23:25:42.898890 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 23:25:42.948897 sshd[1582]: Connection closed by 10.0.0.1 port 45070 May 15 23:25:42.949357 sshd-session[1580]: pam_unix(sshd:session): session closed for user core May 15 23:25:42.967831 systemd[1]: sshd@1-10.0.0.41:22-10.0.0.1:45070.service: Deactivated successfully. May 15 23:25:42.969438 systemd[1]: session-2.scope: Deactivated successfully. May 15 23:25:42.970865 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. May 15 23:25:42.972168 systemd[1]: Started sshd@2-10.0.0.41:22-10.0.0.1:45076.service - OpenSSH per-connection server daemon (10.0.0.1:45076). May 15 23:25:42.973050 systemd-logind[1440]: Removed session 2. May 15 23:25:43.017628 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 45076 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:43.018624 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:43.022725 systemd-logind[1440]: New session 3 of user core. May 15 23:25:43.039862 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 23:25:43.087624 sshd[1590]: Connection closed by 10.0.0.1 port 45076 May 15 23:25:43.087997 sshd-session[1587]: pam_unix(sshd:session): session closed for user core May 15 23:25:43.098746 systemd[1]: sshd@2-10.0.0.41:22-10.0.0.1:45076.service: Deactivated successfully. May 15 23:25:43.100184 systemd[1]: session-3.scope: Deactivated successfully. May 15 23:25:43.101431 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. May 15 23:25:43.102587 systemd[1]: Started sshd@3-10.0.0.41:22-10.0.0.1:45092.service - OpenSSH per-connection server daemon (10.0.0.1:45092). May 15 23:25:43.103460 systemd-logind[1440]: Removed session 3. May 15 23:25:43.145620 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 45092 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:43.146656 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:43.150747 systemd-logind[1440]: New session 4 of user core. May 15 23:25:43.158814 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 23:25:43.208382 sshd[1598]: Connection closed by 10.0.0.1 port 45092 May 15 23:25:43.208640 sshd-session[1595]: pam_unix(sshd:session): session closed for user core May 15 23:25:43.217621 systemd[1]: sshd@3-10.0.0.41:22-10.0.0.1:45092.service: Deactivated successfully. May 15 23:25:43.219023 systemd[1]: session-4.scope: Deactivated successfully. May 15 23:25:43.219609 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. May 15 23:25:43.221276 systemd[1]: Started sshd@4-10.0.0.41:22-10.0.0.1:45108.service - OpenSSH per-connection server daemon (10.0.0.1:45108). May 15 23:25:43.222067 systemd-logind[1440]: Removed session 4. 
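Each accepted login above is logged with the SHA256 fingerprint of the key that matched (SHA256:6GFL...). To see which entry in core's authorized_keys it corresponds to, or to audit the host keys that sshd-keygen.service generated earlier, the fingerprints can be recomputed locally (the authorized_keys path is the usual one for the core user and is an assumption here):

ssh-keygen -lf /home/core/.ssh/authorized_keys                        # one fingerprint per authorized key
for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done    # host key fingerprints
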
May 15 23:25:43.265439 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 45108 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:43.266474 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:43.270335 systemd-logind[1440]: New session 5 of user core. May 15 23:25:43.280844 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 23:25:43.337320 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 23:25:43.337567 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:25:43.354501 sudo[1607]: pam_unix(sudo:session): session closed for user root May 15 23:25:43.356348 sshd[1606]: Connection closed by 10.0.0.1 port 45108 May 15 23:25:43.356170 sshd-session[1603]: pam_unix(sshd:session): session closed for user core May 15 23:25:43.369878 systemd[1]: sshd@4-10.0.0.41:22-10.0.0.1:45108.service: Deactivated successfully. May 15 23:25:43.372034 systemd[1]: session-5.scope: Deactivated successfully. May 15 23:25:43.373625 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. May 15 23:25:43.375493 systemd[1]: Started sshd@5-10.0.0.41:22-10.0.0.1:45122.service - OpenSSH per-connection server daemon (10.0.0.1:45122). May 15 23:25:43.376559 systemd-logind[1440]: Removed session 5. May 15 23:25:43.427046 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 45122 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:43.428153 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:43.432156 systemd-logind[1440]: New session 6 of user core. May 15 23:25:43.439818 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 23:25:43.488741 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 23:25:43.489003 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:25:43.492027 sudo[1617]: pam_unix(sudo:session): session closed for user root May 15 23:25:43.496218 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 23:25:43.496475 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:25:43.504366 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:25:43.536958 augenrules[1639]: No rules May 15 23:25:43.537642 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:25:43.537858 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:25:43.538913 sudo[1616]: pam_unix(sudo:session): session closed for user root May 15 23:25:43.540010 sshd[1615]: Connection closed by 10.0.0.1 port 45122 May 15 23:25:43.540381 sshd-session[1612]: pam_unix(sshd:session): session closed for user core May 15 23:25:43.550011 systemd[1]: sshd@5-10.0.0.41:22-10.0.0.1:45122.service: Deactivated successfully. May 15 23:25:43.551326 systemd[1]: session-6.scope: Deactivated successfully. May 15 23:25:43.552501 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. May 15 23:25:43.553520 systemd[1]: Started sshd@6-10.0.0.41:22-10.0.0.1:45130.service - OpenSSH per-connection server daemon (10.0.0.1:45130). May 15 23:25:43.555407 systemd-logind[1440]: Removed session 6. 
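After the two sudo calls above delete the shipped rule files and audit-rules.service is restarted, augenrules reports "No rules", i.e. the kernel audit ruleset is now empty. A hypothetical replacement drop-in (file name and watch paths are illustrative only), followed by the same merge-and-load step the service performs:

cat > /etc/audit/rules.d/50-kube.rules <<'EOF'
-w /var/lib/kubelet/config.yaml -p wa -k kubelet-config
-w /etc/kubernetes/manifests/ -p wa -k static-pods
EOF
augenrules --load    # merges /etc/audit/rules.d/*.rules and loads the result
auditctl -l          # now lists the active rules instead of "No rules"
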
May 15 23:25:43.598876 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45130 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:25:43.599942 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:25:43.603670 systemd-logind[1440]: New session 7 of user core. May 15 23:25:43.612816 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 23:25:43.662212 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 23:25:43.662487 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 23:25:43.982588 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 23:25:43.993992 (dockerd)[1671]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 23:25:44.236182 dockerd[1671]: time="2025-05-15T23:25:44.236054129Z" level=info msg="Starting up" May 15 23:25:44.237116 dockerd[1671]: time="2025-05-15T23:25:44.237085236Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 15 23:25:44.340151 dockerd[1671]: time="2025-05-15T23:25:44.340104322Z" level=info msg="Loading containers: start." May 15 23:25:44.533709 kernel: Initializing XFRM netlink socket May 15 23:25:44.589535 systemd-networkd[1389]: docker0: Link UP May 15 23:25:44.645839 dockerd[1671]: time="2025-05-15T23:25:44.645784958Z" level=info msg="Loading containers: done." May 15 23:25:44.660753 dockerd[1671]: time="2025-05-15T23:25:44.660662468Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 23:25:44.660876 dockerd[1671]: time="2025-05-15T23:25:44.660787299Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 15 23:25:44.662755 dockerd[1671]: time="2025-05-15T23:25:44.662728639Z" level=info msg="Daemon has completed initialization" May 15 23:25:44.689201 dockerd[1671]: time="2025-05-15T23:25:44.689152906Z" level=info msg="API listen on /run/docker.sock" May 15 23:25:44.689430 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 23:25:45.571192 containerd[1456]: time="2025-05-15T23:25:45.571149529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 15 23:25:46.132470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946868023.mount: Deactivated successfully. 
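Once dockerd reports "API listen on /run/docker.sock", the settings it logged (overlay2 storage driver, version 27.4.1, containerd-snapshotter disabled) can be read back from the running daemon; a quick check, assuming the docker CLI is on the PATH:

docker info --format 'driver={{.Driver}} cgroup={{.CgroupDriver}} version={{.ServerVersion}}'
ip link show docker0    # the bridge systemd-networkd just reported as "Link UP"
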
May 15 23:25:47.089249 containerd[1456]: time="2025-05-15T23:25:47.089189325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:47.090117 containerd[1456]: time="2025-05-15T23:25:47.089863058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976" May 15 23:25:47.090750 containerd[1456]: time="2025-05-15T23:25:47.090721520Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:47.093375 containerd[1456]: time="2025-05-15T23:25:47.093347344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:47.095186 containerd[1456]: time="2025-05-15T23:25:47.095150732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.523953544s" May 15 23:25:47.095239 containerd[1456]: time="2025-05-15T23:25:47.095189100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 15 23:25:47.098105 containerd[1456]: time="2025-05-15T23:25:47.098068404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 15 23:25:48.258929 containerd[1456]: time="2025-05-15T23:25:48.258870214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:48.259398 containerd[1456]: time="2025-05-15T23:25:48.259346058Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530" May 15 23:25:48.260351 containerd[1456]: time="2025-05-15T23:25:48.260282721Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:48.262656 containerd[1456]: time="2025-05-15T23:25:48.262627955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:48.263738 containerd[1456]: time="2025-05-15T23:25:48.263668969Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.16555663s" May 15 23:25:48.263797 containerd[1456]: time="2025-05-15T23:25:48.263745057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 15 23:25:48.264397 
containerd[1456]: time="2025-05-15T23:25:48.264173675Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 15 23:25:49.372431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 23:25:49.374340 containerd[1456]: time="2025-05-15T23:25:49.374157258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:49.374227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:25:49.375478 containerd[1456]: time="2025-05-15T23:25:49.375419365Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281" May 15 23:25:49.376578 containerd[1456]: time="2025-05-15T23:25:49.376546842Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:49.379074 containerd[1456]: time="2025-05-15T23:25:49.379009270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:49.380251 containerd[1456]: time="2025-05-15T23:25:49.379930587Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.115725223s" May 15 23:25:49.380251 containerd[1456]: time="2025-05-15T23:25:49.379966234Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 15 23:25:49.380503 containerd[1456]: time="2025-05-15T23:25:49.380451200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 15 23:25:49.485155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:25:49.488517 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:25:49.524520 kubelet[1949]: E0515 23:25:49.524469 1949 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:25:49.527606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:25:49.527793 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:25:49.528198 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.7M memory peak. May 15 23:25:50.446662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650662967.mount: Deactivated successfully. 
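The PullImage / ImageCreate / Pulled sequences here are the CRI image service fetching the v1.31.9 control-plane images. The same pulls can be reproduced by hand, assuming crictl is pointed at the containerd socket logged above (/run/containerd/containerd.sock):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.31.9
# containerd-native equivalent, in the k8s.io namespace the CRI plugin uses:
ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.31.9
crictl images | grep registry.k8s.io
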
May 15 23:25:50.785175 containerd[1456]: time="2025-05-15T23:25:50.785051671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:50.785912 containerd[1456]: time="2025-05-15T23:25:50.785859157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377" May 15 23:25:50.786679 containerd[1456]: time="2025-05-15T23:25:50.786652947Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:50.788397 containerd[1456]: time="2025-05-15T23:25:50.788370950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:50.788959 containerd[1456]: time="2025-05-15T23:25:50.788928383Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.408440894s" May 15 23:25:50.788998 containerd[1456]: time="2025-05-15T23:25:50.788963815Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 15 23:25:50.789414 containerd[1456]: time="2025-05-15T23:25:50.789388200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 23:25:51.345854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280173537.mount: Deactivated successfully. 
May 15 23:25:52.037749 containerd[1456]: time="2025-05-15T23:25:52.037699705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:52.038150 containerd[1456]: time="2025-05-15T23:25:52.038098975Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 15 23:25:52.039183 containerd[1456]: time="2025-05-15T23:25:52.039158447Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:52.042433 containerd[1456]: time="2025-05-15T23:25:52.042395413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:52.043109 containerd[1456]: time="2025-05-15T23:25:52.043055017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.253636273s" May 15 23:25:52.043109 containerd[1456]: time="2025-05-15T23:25:52.043084909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 23:25:52.044343 containerd[1456]: time="2025-05-15T23:25:52.044100820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 23:25:52.494776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3045589726.mount: Deactivated successfully. 
May 15 23:25:52.499424 containerd[1456]: time="2025-05-15T23:25:52.499388169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:25:52.500279 containerd[1456]: time="2025-05-15T23:25:52.500021629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 23:25:52.500703 containerd[1456]: time="2025-05-15T23:25:52.500648114Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:25:52.502363 containerd[1456]: time="2025-05-15T23:25:52.502316535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:25:52.503309 containerd[1456]: time="2025-05-15T23:25:52.503265848Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 459.116166ms" May 15 23:25:52.503309 containerd[1456]: time="2025-05-15T23:25:52.503301279Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 23:25:52.503825 containerd[1456]: time="2025-05-15T23:25:52.503792974Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 15 23:25:53.018786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896400937.mount: Deactivated successfully. 
May 15 23:25:54.644862 containerd[1456]: time="2025-05-15T23:25:54.644818087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:54.645890 containerd[1456]: time="2025-05-15T23:25:54.645345103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 15 23:25:54.647176 containerd[1456]: time="2025-05-15T23:25:54.647088021Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:54.651305 containerd[1456]: time="2025-05-15T23:25:54.651273195Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.147449207s" May 15 23:25:54.651305 containerd[1456]: time="2025-05-15T23:25:54.651308457Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 15 23:25:54.651808 containerd[1456]: time="2025-05-15T23:25:54.651771052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:25:59.622114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 23:25:59.623537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:25:59.767136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:25:59.770001 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 23:25:59.809516 kubelet[2106]: E0515 23:25:59.809462 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 23:25:59.811143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 23:25:59.811272 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 23:25:59.812769 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.6M memory peak. May 15 23:25:59.918167 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:25:59.918306 systemd[1]: kubelet.service: Consumed 131ms CPU time, 107.6M memory peak. May 15 23:25:59.920267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:25:59.940775 systemd[1]: Reload requested from client PID 2122 ('systemctl') (unit session-7.scope)... May 15 23:25:59.940789 systemd[1]: Reloading... May 15 23:26:00.007717 zram_generator::config[2165]: No configuration found. May 15 23:26:00.308464 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:26:00.381338 systemd[1]: Reloading finished in 440 ms. 
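The reload above also surfaces a cosmetic warning about docker.socket still pointing at the legacy /var/run/docker.sock path. systemd rewrites it on the fly, but a drop-in can make the unit explicit (socket lists are additive, so the empty ListenStream= first clears the inherited value):

mkdir -p /etc/systemd/system/docker.socket.d
cat > /etc/systemd/system/docker.socket.d/10-run-path.conf <<'EOF'
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
systemctl daemon-reload
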
May 15 23:26:00.430670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:26:00.433464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:26:00.434076 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:26:00.434258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:26:00.434293 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.1M memory peak. May 15 23:26:00.435565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:26:00.545543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:26:00.549600 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:26:00.585187 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:26:00.585187 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:26:00.585187 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:26:00.585576 kubelet[2212]: I0515 23:26:00.585167 2212 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:26:01.379459 kubelet[2212]: I0515 23:26:01.379411 2212 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:26:01.379459 kubelet[2212]: I0515 23:26:01.379447 2212 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:26:01.379714 kubelet[2212]: I0515 23:26:01.379700 2212 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:26:01.404179 kubelet[2212]: I0515 23:26:01.404149 2212 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:26:01.404809 kubelet[2212]: E0515 23:26:01.404780 2212 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" May 15 23:26:01.412622 kubelet[2212]: I0515 23:26:01.412591 2212 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:26:01.416149 kubelet[2212]: I0515 23:26:01.416122 2212 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:26:01.416410 kubelet[2212]: I0515 23:26:01.416390 2212 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 23:26:01.416532 kubelet[2212]: I0515 23:26:01.416502 2212 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:26:01.416714 kubelet[2212]: I0515 23:26:01.416527 2212 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:26:01.416850 kubelet[2212]: I0515 23:26:01.416831 2212 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:26:01.416850 kubelet[2212]: I0515 23:26:01.416841 2212 container_manager_linux.go:300] "Creating device plugin manager" May 15 23:26:01.417095 kubelet[2212]: I0515 23:26:01.417066 2212 state_mem.go:36] "Initialized new in-memory state store" May 15 23:26:01.419232 kubelet[2212]: I0515 23:26:01.419025 2212 kubelet.go:408] "Attempting to sync node with API server" May 15 23:26:01.419232 kubelet[2212]: I0515 23:26:01.419053 2212 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:26:01.419232 kubelet[2212]: I0515 23:26:01.419074 2212 kubelet.go:314] "Adding apiserver pod source" May 15 23:26:01.419232 kubelet[2212]: I0515 23:26:01.419148 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:26:01.422134 kubelet[2212]: W0515 23:26:01.422077 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 15 23:26:01.422185 kubelet[2212]: E0515 23:26:01.422144 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" May 15 23:26:01.423679 kubelet[2212]: W0515 23:26:01.423644 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 15 23:26:01.423745 kubelet[2212]: E0515 23:26:01.423698 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" May 15 23:26:01.424799 kubelet[2212]: I0515 23:26:01.424781 2212 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:26:01.425727 kubelet[2212]: I0515 23:26:01.425712 2212 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:26:01.425889 kubelet[2212]: W0515 23:26:01.425877 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 23:26:01.427025 kubelet[2212]: I0515 23:26:01.426877 2212 server.go:1274] "Started kubelet" May 15 23:26:01.427107 kubelet[2212]: I0515 23:26:01.427039 2212 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:26:01.427452 kubelet[2212]: I0515 23:26:01.427410 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 23:26:01.427736 kubelet[2212]: I0515 23:26:01.427702 2212 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:26:01.428712 kubelet[2212]: I0515 23:26:01.428665 2212 server.go:449] "Adding debug handlers to kubelet server" May 15 23:26:01.430544 kubelet[2212]: I0515 23:26:01.429238 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:26:01.430544 kubelet[2212]: I0515 23:26:01.429347 2212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:26:01.430544 kubelet[2212]: I0515 23:26:01.429699 2212 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 23:26:01.430544 kubelet[2212]: I0515 23:26:01.429788 2212 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 15 23:26:01.430544 kubelet[2212]: I0515 23:26:01.429839 2212 reconciler.go:26] "Reconciler: start to sync state" May 15 23:26:01.430544 kubelet[2212]: W0515 23:26:01.430183 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 15 23:26:01.430544 kubelet[2212]: E0515 23:26:01.430219 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" 
logger="UnhandledError" May 15 23:26:01.430544 kubelet[2212]: E0515 23:26:01.430500 2212 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:26:01.430895 kubelet[2212]: E0515 23:26:01.430872 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="200ms" May 15 23:26:01.431956 kubelet[2212]: E0515 23:26:01.429837 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.41:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.41:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd6fcbc924f5f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:26:01.426849631 +0000 UTC m=+0.874511625,LastTimestamp:2025-05-15 23:26:01.426849631 +0000 UTC m=+0.874511625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 23:26:01.432249 kubelet[2212]: I0515 23:26:01.432229 2212 factory.go:221] Registration of the containerd container factory successfully May 15 23:26:01.432249 kubelet[2212]: I0515 23:26:01.432244 2212 factory.go:221] Registration of the systemd container factory successfully May 15 23:26:01.433089 kubelet[2212]: I0515 23:26:01.432322 2212 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:26:01.433089 kubelet[2212]: E0515 23:26:01.432570 2212 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:26:01.445910 kubelet[2212]: I0515 23:26:01.445873 2212 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:26:01.445910 kubelet[2212]: I0515 23:26:01.445897 2212 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:26:01.445910 kubelet[2212]: I0515 23:26:01.445915 2212 state_mem.go:36] "Initialized new in-memory state store" May 15 23:26:01.446134 kubelet[2212]: I0515 23:26:01.446094 2212 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:26:01.447672 kubelet[2212]: I0515 23:26:01.447639 2212 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:26:01.447672 kubelet[2212]: I0515 23:26:01.447669 2212 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:26:01.447798 kubelet[2212]: I0515 23:26:01.447769 2212 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:26:01.447843 kubelet[2212]: E0515 23:26:01.447818 2212 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:26:01.523676 kubelet[2212]: I0515 23:26:01.523625 2212 policy_none.go:49] "None policy: Start" May 15 23:26:01.524038 kubelet[2212]: W0515 23:26:01.524007 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 15 23:26:01.524154 kubelet[2212]: E0515 23:26:01.524124 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" May 15 23:26:01.524381 kubelet[2212]: I0515 23:26:01.524366 2212 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:26:01.524500 kubelet[2212]: I0515 23:26:01.524392 2212 state_mem.go:35] "Initializing new in-memory state store" May 15 23:26:01.530257 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 23:26:01.531309 kubelet[2212]: E0515 23:26:01.530710 2212 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:26:01.543206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 23:26:01.545636 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
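The deprecation notices logged when this kubelet started say that --container-runtime-endpoint and --volume-plugin-dir belong in the --config file, and the unit also expands two environment variables (KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS) that are currently unset. A hypothetical drop-in that defines the extra-args variable (the --node-ip value is a placeholder matching this host's 10.0.0.41 address), plus the config-file equivalents of the deprecated flags (paths taken from the endpoints and Flexvolume directory logged above):

mkdir -p /etc/systemd/system/kubelet.service.d
cat > /etc/systemd/system/kubelet.service.d/20-extra-args.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.0.41"
EOF
# same settings as the deprecated flags, expressed as KubeletConfiguration fields
# (appended here as a sketch; in practice edit the file rather than duplicating keys):
cat >> /var/lib/kubelet/config.yaml <<'EOF'
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
systemctl daemon-reload && systemctl restart kubelet
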
May 15 23:26:01.548229 kubelet[2212]: E0515 23:26:01.548199 2212 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 23:26:01.556366 kubelet[2212]: I0515 23:26:01.556347 2212 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:26:01.556729 kubelet[2212]: I0515 23:26:01.556543 2212 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:26:01.556729 kubelet[2212]: I0515 23:26:01.556560 2212 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:26:01.557053 kubelet[2212]: I0515 23:26:01.557012 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:26:01.558606 kubelet[2212]: E0515 23:26:01.558573 2212 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:26:01.632532 kubelet[2212]: E0515 23:26:01.631639 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="400ms" May 15 23:26:01.657640 kubelet[2212]: I0515 23:26:01.657609 2212 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:26:01.658112 kubelet[2212]: E0515 23:26:01.658080 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 15 23:26:01.755173 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 15 23:26:01.777514 systemd[1]: Created slice kubepods-burstable-pod270c49ff8c1fa7a95769c67d0cad6fab.slice - libcontainer container kubepods-burstable-pod270c49ff8c1fa7a95769c67d0cad6fab.slice. May 15 23:26:01.789757 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. 
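The kubepods-burstable-pod<uid>.slice units created here are the per-pod cgroups for the three static control-plane pods; the UID embedded in each slice name matches the pod UID the volume reconciler uses in the entries that follow. They can be inspected like any other systemd slice:

systemctl list-units --type=slice 'kubepods*'
systemd-cgls --unit kubepods-burstable.slice --no-pager
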
May 15 23:26:01.832302 kubelet[2212]: I0515 23:26:01.832266 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:01.832302 kubelet[2212]: I0515 23:26:01.832303 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:01.832393 kubelet[2212]: I0515 23:26:01.832323 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:01.832393 kubelet[2212]: I0515 23:26:01.832339 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:01.832451 kubelet[2212]: I0515 23:26:01.832399 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:26:01.832451 kubelet[2212]: I0515 23:26:01.832430 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:01.832496 kubelet[2212]: I0515 23:26:01.832455 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:01.832496 kubelet[2212]: I0515 23:26:01.832481 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:01.832535 kubelet[2212]: I0515 23:26:01.832497 2212 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " 
pod="kube-system/kube-controller-manager-localhost" May 15 23:26:01.859153 kubelet[2212]: I0515 23:26:01.859105 2212 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:26:01.859462 kubelet[2212]: E0515 23:26:01.859426 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 15 23:26:02.032878 kubelet[2212]: E0515 23:26:02.032768 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.41:6443: connect: connection refused" interval="800ms" May 15 23:26:02.076104 kubelet[2212]: E0515 23:26:02.076058 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.076732 containerd[1456]: time="2025-05-15T23:26:02.076668039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 15 23:26:02.088921 kubelet[2212]: E0515 23:26:02.088886 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.089442 containerd[1456]: time="2025-05-15T23:26:02.089243297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:270c49ff8c1fa7a95769c67d0cad6fab,Namespace:kube-system,Attempt:0,}" May 15 23:26:02.092050 kubelet[2212]: E0515 23:26:02.092021 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.092387 containerd[1456]: time="2025-05-15T23:26:02.092349178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 15 23:26:02.112280 containerd[1456]: time="2025-05-15T23:26:02.112240226Z" level=info msg="connecting to shim 6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629" address="unix:///run/containerd/s/d990657bc31f64629130f4afc8797ff90cd150e665154af78e06f9c185016207" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:02.122401 containerd[1456]: time="2025-05-15T23:26:02.121938306Z" level=info msg="connecting to shim cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45" address="unix:///run/containerd/s/07f700fd5dca4c7ff9499d29109a754ede88dbbffebbe421c6240781e43851ba" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:02.128671 containerd[1456]: time="2025-05-15T23:26:02.128636564Z" level=info msg="connecting to shim 56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e" address="unix:///run/containerd/s/1142bbf00460dfa140c04d306ba894ab0669018e71962e657d76160fc3380fc3" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:02.137839 systemd[1]: Started cri-containerd-6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629.scope - libcontainer container 6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629. 
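The repeated dns.go warnings above are raised because resolv.conf lists more nameservers than the resolver supports; the kubelet applies only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. A small sketch of that truncation, assuming a hypothetical fourth entry that the log itself does not show:

package main

import "fmt"

const maxNameservers = 3 // resolver limit (glibc MAXNS) behind the kubelet warning

func main() {
	// the fourth address is a hypothetical extra entry; the log only shows the surviving three
	configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.53"}
	applied := configured
	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", applied)
}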
May 15 23:26:02.141106 systemd[1]: Started cri-containerd-cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45.scope - libcontainer container cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45. May 15 23:26:02.162840 systemd[1]: Started cri-containerd-56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e.scope - libcontainer container 56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e. May 15 23:26:02.184887 containerd[1456]: time="2025-05-15T23:26:02.184545615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629\"" May 15 23:26:02.187038 kubelet[2212]: E0515 23:26:02.185523 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.188959 containerd[1456]: time="2025-05-15T23:26:02.188107941Z" level=info msg="CreateContainer within sandbox \"6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:26:02.190933 containerd[1456]: time="2025-05-15T23:26:02.189193027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:270c49ff8c1fa7a95769c67d0cad6fab,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45\"" May 15 23:26:02.190976 kubelet[2212]: E0515 23:26:02.190089 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.192470 containerd[1456]: time="2025-05-15T23:26:02.192432420Z" level=info msg="CreateContainer within sandbox \"cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:26:02.198851 containerd[1456]: time="2025-05-15T23:26:02.198808945Z" level=info msg="Container 15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:02.203031 containerd[1456]: time="2025-05-15T23:26:02.202995196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e\"" May 15 23:26:02.203210 containerd[1456]: time="2025-05-15T23:26:02.203176184Z" level=info msg="Container 3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:02.203848 kubelet[2212]: E0515 23:26:02.203823 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.205188 containerd[1456]: time="2025-05-15T23:26:02.205063865Z" level=info msg="CreateContainer within sandbox \"56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:26:02.208142 containerd[1456]: time="2025-05-15T23:26:02.208110043Z" level=info msg="CreateContainer within sandbox \"6a100955f274d39e4d122fc3e9fcb86aa3e08a113dd8251f1081bc04f888c629\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b\"" May 15 23:26:02.210638 containerd[1456]: time="2025-05-15T23:26:02.210611500Z" level=info msg="StartContainer for \"15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b\"" May 15 23:26:02.211684 containerd[1456]: time="2025-05-15T23:26:02.211636483Z" level=info msg="connecting to shim 15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b" address="unix:///run/containerd/s/d990657bc31f64629130f4afc8797ff90cd150e665154af78e06f9c185016207" protocol=ttrpc version=3 May 15 23:26:02.212160 containerd[1456]: time="2025-05-15T23:26:02.212127615Z" level=info msg="CreateContainer within sandbox \"cf1ac240f19fb515fa5f57c9011058f7895e92c62a835ea7f2fdde7bae47ac45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada\"" May 15 23:26:02.212561 containerd[1456]: time="2025-05-15T23:26:02.212469529Z" level=info msg="StartContainer for \"3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada\"" May 15 23:26:02.213408 containerd[1456]: time="2025-05-15T23:26:02.213375346Z" level=info msg="connecting to shim 3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada" address="unix:///run/containerd/s/07f700fd5dca4c7ff9499d29109a754ede88dbbffebbe421c6240781e43851ba" protocol=ttrpc version=3 May 15 23:26:02.214098 containerd[1456]: time="2025-05-15T23:26:02.214061812Z" level=info msg="Container 52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:02.220008 containerd[1456]: time="2025-05-15T23:26:02.219969064Z" level=info msg="CreateContainer within sandbox \"56992c538f155283f484fe2c5b2a311df3a67fe3ffd4d3476acc11450b48f35e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd\"" May 15 23:26:02.220525 containerd[1456]: time="2025-05-15T23:26:02.220500438Z" level=info msg="StartContainer for \"52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd\"" May 15 23:26:02.221535 containerd[1456]: time="2025-05-15T23:26:02.221500645Z" level=info msg="connecting to shim 52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd" address="unix:///run/containerd/s/1142bbf00460dfa140c04d306ba894ab0669018e71962e657d76160fc3380fc3" protocol=ttrpc version=3 May 15 23:26:02.233907 systemd[1]: Started cri-containerd-15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b.scope - libcontainer container 15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b. May 15 23:26:02.235109 systemd[1]: Started cri-containerd-3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada.scope - libcontainer container 3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada. May 15 23:26:02.239987 systemd[1]: Started cri-containerd-52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd.scope - libcontainer container 52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd. 
May 15 23:26:02.261064 kubelet[2212]: I0515 23:26:02.261031 2212 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:26:02.261859 kubelet[2212]: E0515 23:26:02.261393 2212 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.41:6443/api/v1/nodes\": dial tcp 10.0.0.41:6443: connect: connection refused" node="localhost" May 15 23:26:02.290860 containerd[1456]: time="2025-05-15T23:26:02.290128457Z" level=info msg="StartContainer for \"52b8dffdc00b4efc567727d65db9e80789a4c609405777dea3450c5fcec45ccd\" returns successfully" May 15 23:26:02.296105 containerd[1456]: time="2025-05-15T23:26:02.295978603Z" level=info msg="StartContainer for \"15af6d68b94ded05ed94b871f33f715836fc88b577838f990f01ffcb1bad932b\" returns successfully" May 15 23:26:02.311527 containerd[1456]: time="2025-05-15T23:26:02.311479834Z" level=info msg="StartContainer for \"3e16bfa214b989b76d69c5507f7befa9886b1e747c0c9848ea37d66aa7591ada\" returns successfully" May 15 23:26:02.441992 kubelet[2212]: W0515 23:26:02.441822 2212 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.41:6443: connect: connection refused May 15 23:26:02.441992 kubelet[2212]: E0515 23:26:02.441961 2212 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.41:6443: connect: connection refused" logger="UnhandledError" May 15 23:26:02.456119 kubelet[2212]: E0515 23:26:02.456096 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.461498 kubelet[2212]: E0515 23:26:02.461383 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:02.463260 kubelet[2212]: E0515 23:26:02.463243 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:03.063707 kubelet[2212]: I0515 23:26:03.063671 2212 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:26:03.464916 kubelet[2212]: E0515 23:26:03.464888 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:03.947214 kubelet[2212]: E0515 23:26:03.947188 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:04.013941 kubelet[2212]: E0515 23:26:04.013861 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 23:26:04.114351 kubelet[2212]: I0515 23:26:04.113725 2212 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:26:04.421089 kubelet[2212]: I0515 23:26:04.421057 2212 apiserver.go:52] "Watching apiserver" May 15 23:26:04.430382 kubelet[2212]: I0515 
23:26:04.430357 2212 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:26:06.017803 systemd[1]: Reload requested from client PID 2484 ('systemctl') (unit session-7.scope)... May 15 23:26:06.017818 systemd[1]: Reloading... May 15 23:26:06.088795 zram_generator::config[2526]: No configuration found. May 15 23:26:06.171965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:26:06.256344 systemd[1]: Reloading finished in 238 ms. May 15 23:26:06.275263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:26:06.289531 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:26:06.289816 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:26:06.289871 systemd[1]: kubelet.service: Consumed 1.254s CPU time, 129.5M memory peak. May 15 23:26:06.291506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:26:06.421436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:26:06.426749 (kubelet)[2570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:26:06.464016 kubelet[2570]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:26:06.464016 kubelet[2570]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:26:06.464016 kubelet[2570]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:26:06.464332 kubelet[2570]: I0515 23:26:06.464059 2570 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:26:06.470406 kubelet[2570]: I0515 23:26:06.470371 2570 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:26:06.470406 kubelet[2570]: I0515 23:26:06.470398 2570 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:26:06.470641 kubelet[2570]: I0515 23:26:06.470615 2570 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:26:06.472001 kubelet[2570]: I0515 23:26:06.471980 2570 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 23:26:06.474079 kubelet[2570]: I0515 23:26:06.474051 2570 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 23:26:06.477315 kubelet[2570]: I0515 23:26:06.477297 2570 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 15 23:26:06.480098 kubelet[2570]: I0515 23:26:06.479893 2570 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 23:26:06.480098 kubelet[2570]: I0515 23:26:06.479994 2570 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 15 23:26:06.480201 kubelet[2570]: I0515 23:26:06.480168 2570 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 23:26:06.480492 kubelet[2570]: I0515 23:26:06.480205 2570 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 23:26:06.480579 kubelet[2570]: I0515 23:26:06.480495 2570 topology_manager.go:138] "Creating topology manager with none policy" May 15 23:26:06.480579 kubelet[2570]: I0515 23:26:06.480506 2570 container_manager_linux.go:300] "Creating device plugin manager" May 15 23:26:06.480579 kubelet[2570]: I0515 23:26:06.480538 2570 state_mem.go:36] "Initialized new in-memory state store" May 15 23:26:06.480658 kubelet[2570]: I0515 23:26:06.480638 2570 kubelet.go:408] "Attempting to sync node with API server" May 15 23:26:06.480658 kubelet[2570]: I0515 23:26:06.480651 2570 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 23:26:06.480708 kubelet[2570]: I0515 23:26:06.480667 2570 kubelet.go:314] "Adding apiserver pod source" May 15 23:26:06.480708 kubelet[2570]: I0515 23:26:06.480680 2570 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 23:26:06.481720 kubelet[2570]: I0515 23:26:06.481662 2570 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 15 23:26:06.483167 kubelet[2570]: I0515 23:26:06.483147 2570 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 23:26:06.483717 kubelet[2570]: I0515 23:26:06.483700 2570 server.go:1274] "Started kubelet" May 15 23:26:06.484336 kubelet[2570]: I0515 23:26:06.484295 2570 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 
23:26:06.484544 kubelet[2570]: I0515 23:26:06.484521 2570 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 23:26:06.484605 kubelet[2570]: I0515 23:26:06.484574 2570 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 23:26:06.484950 kubelet[2570]: I0515 23:26:06.484935 2570 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 23:26:06.485191 kubelet[2570]: I0515 23:26:06.485154 2570 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 23:26:06.485346 kubelet[2570]: I0515 23:26:06.485327 2570 server.go:449] "Adding debug handlers to kubelet server" May 15 23:26:06.486398 kubelet[2570]: I0515 23:26:06.486376 2570 volume_manager.go:289] "Starting Kubelet Volume Manager" May 15 23:26:06.486483 kubelet[2570]: I0515 23:26:06.486469 2570 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 15 23:26:06.486601 kubelet[2570]: I0515 23:26:06.486590 2570 reconciler.go:26] "Reconciler: start to sync state" May 15 23:26:06.487293 kubelet[2570]: I0515 23:26:06.487275 2570 factory.go:221] Registration of the systemd container factory successfully May 15 23:26:06.487375 kubelet[2570]: I0515 23:26:06.487358 2570 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 23:26:06.487883 kubelet[2570]: E0515 23:26:06.487857 2570 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:26:06.488090 kubelet[2570]: E0515 23:26:06.488066 2570 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 23:26:06.488646 kubelet[2570]: I0515 23:26:06.488615 2570 factory.go:221] Registration of the containerd container factory successfully May 15 23:26:06.506584 kubelet[2570]: I0515 23:26:06.506476 2570 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 23:26:06.507559 kubelet[2570]: I0515 23:26:06.507286 2570 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 23:26:06.507559 kubelet[2570]: I0515 23:26:06.507305 2570 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:26:06.507559 kubelet[2570]: I0515 23:26:06.507320 2570 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:26:06.507559 kubelet[2570]: E0515 23:26:06.507357 2570 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:26:06.540723 kubelet[2570]: I0515 23:26:06.540623 2570 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:26:06.540723 kubelet[2570]: I0515 23:26:06.540645 2570 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:26:06.540723 kubelet[2570]: I0515 23:26:06.540665 2570 state_mem.go:36] "Initialized new in-memory state store" May 15 23:26:06.540851 kubelet[2570]: I0515 23:26:06.540819 2570 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 23:26:06.540851 kubelet[2570]: I0515 23:26:06.540830 2570 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 23:26:06.540851 kubelet[2570]: I0515 23:26:06.540848 2570 policy_none.go:49] "None policy: Start" May 15 23:26:06.541462 kubelet[2570]: I0515 23:26:06.541442 2570 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:26:06.541462 kubelet[2570]: I0515 23:26:06.541468 2570 state_mem.go:35] "Initializing new in-memory state store" May 15 23:26:06.541650 kubelet[2570]: I0515 23:26:06.541614 2570 state_mem.go:75] "Updated machine memory state" May 15 23:26:06.545398 kubelet[2570]: I0515 23:26:06.545374 2570 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:26:06.545527 kubelet[2570]: I0515 23:26:06.545513 2570 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:26:06.545581 kubelet[2570]: I0515 23:26:06.545529 2570 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:26:06.546007 kubelet[2570]: I0515 23:26:06.545939 2570 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:26:06.648941 kubelet[2570]: I0515 23:26:06.648821 2570 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:26:06.654273 kubelet[2570]: I0515 23:26:06.654242 2570 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 15 23:26:06.654333 kubelet[2570]: I0515 23:26:06.654325 2570 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:26:06.687727 kubelet[2570]: I0515 23:26:06.687682 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:06.687727 kubelet[2570]: I0515 23:26:06.687726 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:06.687913 kubelet[2570]: I0515 23:26:06.687745 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:26:06.687913 kubelet[2570]: I0515 23:26:06.687763 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/270c49ff8c1fa7a95769c67d0cad6fab-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"270c49ff8c1fa7a95769c67d0cad6fab\") " pod="kube-system/kube-apiserver-localhost" May 15 23:26:06.687913 kubelet[2570]: I0515 23:26:06.687781 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:06.687913 kubelet[2570]: I0515 23:26:06.687805 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:06.687913 kubelet[2570]: I0515 23:26:06.687822 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:06.688029 kubelet[2570]: I0515 23:26:06.687838 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:06.688029 kubelet[2570]: I0515 23:26:06.687855 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:26:06.923264 kubelet[2570]: E0515 23:26:06.923235 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:06.924250 kubelet[2570]: E0515 23:26:06.924223 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:06.924300 kubelet[2570]: E0515 23:26:06.924265 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:07.020936 sudo[2609]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 23:26:07.021205 sudo[2609]: pam_unix(sudo:session): session 
opened for user root(uid=0) by core(uid=0) May 15 23:26:07.464762 sudo[2609]: pam_unix(sudo:session): session closed for user root May 15 23:26:07.481420 kubelet[2570]: I0515 23:26:07.481384 2570 apiserver.go:52] "Watching apiserver" May 15 23:26:07.487125 kubelet[2570]: I0515 23:26:07.487098 2570 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:26:07.532245 kubelet[2570]: E0515 23:26:07.529781 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:07.532245 kubelet[2570]: E0515 23:26:07.529805 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:07.542263 kubelet[2570]: E0515 23:26:07.541817 2570 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 23:26:07.542263 kubelet[2570]: E0515 23:26:07.542060 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:07.551068 kubelet[2570]: I0515 23:26:07.550994 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.550961364 podStartE2EDuration="1.550961364s" podCreationTimestamp="2025-05-15 23:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:07.550940814 +0000 UTC m=+1.120633308" watchObservedRunningTime="2025-05-15 23:26:07.550961364 +0000 UTC m=+1.120653858" May 15 23:26:07.565717 kubelet[2570]: I0515 23:26:07.565621 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.5656071310000002 podStartE2EDuration="1.565607131s" podCreationTimestamp="2025-05-15 23:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:07.558937428 +0000 UTC m=+1.128629922" watchObservedRunningTime="2025-05-15 23:26:07.565607131 +0000 UTC m=+1.135299624" May 15 23:26:07.566035 kubelet[2570]: I0515 23:26:07.565743 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.565737787 podStartE2EDuration="1.565737787s" podCreationTimestamp="2025-05-15 23:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:07.565555396 +0000 UTC m=+1.135247890" watchObservedRunningTime="2025-05-15 23:26:07.565737787 +0000 UTC m=+1.135430281" May 15 23:26:08.535384 kubelet[2570]: E0515 23:26:08.535299 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:09.164058 sudo[1651]: pam_unix(sudo:session): session closed for user root May 15 23:26:09.165215 sshd[1650]: Connection closed by 10.0.0.1 port 45130 May 15 23:26:09.165705 sshd-session[1647]: pam_unix(sshd:session): session closed for user core May 15 23:26:09.169028 systemd[1]: 
sshd@6-10.0.0.41:22-10.0.0.1:45130.service: Deactivated successfully. May 15 23:26:09.171288 systemd[1]: session-7.scope: Deactivated successfully. May 15 23:26:09.171477 systemd[1]: session-7.scope: Consumed 7.208s CPU time, 263.6M memory peak. May 15 23:26:09.172411 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. May 15 23:26:09.173165 systemd-logind[1440]: Removed session 7. May 15 23:26:09.455649 kubelet[2570]: E0515 23:26:09.455544 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:09.536636 kubelet[2570]: E0515 23:26:09.536610 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:10.940627 kubelet[2570]: E0515 23:26:10.940596 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:11.765986 kubelet[2570]: I0515 23:26:11.765951 2570 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 23:26:11.766464 containerd[1456]: time="2025-05-15T23:26:11.766427832Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 23:26:11.767004 kubelet[2570]: I0515 23:26:11.766612 2570 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 23:26:12.822341 systemd[1]: Created slice kubepods-besteffort-pode6eb9c14_727f_4ff1_8783_91c6a9ddf896.slice - libcontainer container kubepods-besteffort-pode6eb9c14_727f_4ff1_8783_91c6a9ddf896.slice. 
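Just above, the kubelet pushes the runtime config with podcidr 192.168.0.0/24 for this node. A quick sketch of what that range provides, using only the CIDR value from the kubelet_network.go line in the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24") // PodCIDR reported in the log above
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("pod network %s: %d addresses available on this node\n", ipnet, 1<<(bits-ones))
	// pod network 192.168.0.0/24: 256 addresses available on this node
}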
May 15 23:26:12.828253 kubelet[2570]: I0515 23:26:12.828199 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-bpf-maps\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828499 kubelet[2570]: I0515 23:26:12.828265 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-config-path\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828499 kubelet[2570]: I0515 23:26:12.828292 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6khb\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-kube-api-access-s6khb\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828499 kubelet[2570]: I0515 23:26:12.828329 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-lib-modules\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828499 kubelet[2570]: I0515 23:26:12.828351 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2562d114-4fd6-4bb9-8af3-2b847b20d342-clustermesh-secrets\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828499 kubelet[2570]: I0515 23:26:12.828368 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-hubble-tls\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828618 kubelet[2570]: I0515 23:26:12.828389 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-net\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828618 kubelet[2570]: I0515 23:26:12.828407 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6eb9c14-727f-4ff1-8783-91c6a9ddf896-kube-proxy\") pod \"kube-proxy-tzvp7\" (UID: \"e6eb9c14-727f-4ff1-8783-91c6a9ddf896\") " pod="kube-system/kube-proxy-tzvp7" May 15 23:26:12.828618 kubelet[2570]: I0515 23:26:12.828425 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-hostproc\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828618 kubelet[2570]: I0515 23:26:12.828444 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-cgroup\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828618 kubelet[2570]: I0515 23:26:12.828461 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6eb9c14-727f-4ff1-8783-91c6a9ddf896-xtables-lock\") pod \"kube-proxy-tzvp7\" (UID: \"e6eb9c14-727f-4ff1-8783-91c6a9ddf896\") " pod="kube-system/kube-proxy-tzvp7" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828480 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdqn\" (UniqueName: \"kubernetes.io/projected/e6eb9c14-727f-4ff1-8783-91c6a9ddf896-kube-api-access-nfdqn\") pod \"kube-proxy-tzvp7\" (UID: \"e6eb9c14-727f-4ff1-8783-91c6a9ddf896\") " pod="kube-system/kube-proxy-tzvp7" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828498 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-etc-cni-netd\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828590 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-kernel\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828613 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-run\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828642 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-xtables-lock\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828755 kubelet[2570]: I0515 23:26:12.828661 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cni-path\") pod \"cilium-b7fvm\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " pod="kube-system/cilium-b7fvm" May 15 23:26:12.828884 kubelet[2570]: I0515 23:26:12.828684 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6eb9c14-727f-4ff1-8783-91c6a9ddf896-lib-modules\") pod \"kube-proxy-tzvp7\" (UID: \"e6eb9c14-727f-4ff1-8783-91c6a9ddf896\") " pod="kube-system/kube-proxy-tzvp7" May 15 23:26:12.834563 systemd[1]: Created slice kubepods-burstable-pod2562d114_4fd6_4bb9_8af3_2b847b20d342.slice - libcontainer container kubepods-burstable-pod2562d114_4fd6_4bb9_8af3_2b847b20d342.slice. 
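The "Created slice" entries around here (kubepods-besteffort-pod…e6eb9c14… for kube-proxy-tzvp7 and kubepods-burstable-pod…2562d114… for cilium-b7fvm) follow the leaf-name pattern the systemd cgroup driver uses for pod slices: QoS class plus the pod UID with dashes turned into underscores. A small sketch reproducing those names from the UIDs shown in the volume lines above:

package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the leaf slice names visible in the "Created slice" log lines.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "e6eb9c14-727f-4ff1-8783-91c6a9ddf896")) // kube-proxy-tzvp7
	fmt.Println(sliceName("burstable", "2562d114-4fd6-4bb9-8af3-2b847b20d342"))  // cilium-b7fvm
}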
May 15 23:26:12.874904 systemd[1]: Created slice kubepods-besteffort-pod1aee7d82_e01a_4611_ad4e_cff9df216cbe.slice - libcontainer container kubepods-besteffort-pod1aee7d82_e01a_4611_ad4e_cff9df216cbe.slice. May 15 23:26:12.929214 kubelet[2570]: I0515 23:26:12.929159 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aee7d82-e01a-4611-ad4e-cff9df216cbe-cilium-config-path\") pod \"cilium-operator-5d85765b45-9c6h4\" (UID: \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\") " pod="kube-system/cilium-operator-5d85765b45-9c6h4" May 15 23:26:12.929214 kubelet[2570]: I0515 23:26:12.929208 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rtqb\" (UniqueName: \"kubernetes.io/projected/1aee7d82-e01a-4611-ad4e-cff9df216cbe-kube-api-access-6rtqb\") pod \"cilium-operator-5d85765b45-9c6h4\" (UID: \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\") " pod="kube-system/cilium-operator-5d85765b45-9c6h4" May 15 23:26:13.132144 kubelet[2570]: E0515 23:26:13.132066 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.132616 containerd[1456]: time="2025-05-15T23:26:13.132584067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzvp7,Uid:e6eb9c14-727f-4ff1-8783-91c6a9ddf896,Namespace:kube-system,Attempt:0,}" May 15 23:26:13.138250 kubelet[2570]: E0515 23:26:13.138219 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.139196 containerd[1456]: time="2025-05-15T23:26:13.139095332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7fvm,Uid:2562d114-4fd6-4bb9-8af3-2b847b20d342,Namespace:kube-system,Attempt:0,}" May 15 23:26:13.149997 containerd[1456]: time="2025-05-15T23:26:13.149955413Z" level=info msg="connecting to shim 64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af" address="unix:///run/containerd/s/070550b309494d7a2b56f501baa6f6e384299b8e8608937209d0ccc25b7595ef" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:13.156871 containerd[1456]: time="2025-05-15T23:26:13.156793181Z" level=info msg="connecting to shim c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:13.170844 systemd[1]: Started cri-containerd-64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af.scope - libcontainer container 64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af. May 15 23:26:13.173675 systemd[1]: Started cri-containerd-c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721.scope - libcontainer container c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721. 
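The "connecting to shim … address=unix:///run/containerd/s/…" entries above appear to name per-sandbox unix-domain sockets. A sketch of checking one of them from the node itself (assumptions: run locally, socket path copied from the kube-proxy sandbox line above); a raw dial only confirms the socket accepts connections, since the real clients speak ttrpc over it, as the protocol=ttrpc fields indicate:

package main

import (
	"fmt"
	"net"
)

func main() {
	// socket path copied from one of the "connecting to shim" lines above
	const sock = "/run/containerd/s/070550b309494d7a2b56f501baa6f6e384299b8e8608937209d0ccc25b7595ef"
	conn, err := net.Dial("unix", sock)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("shim socket is present and accepting connections")
}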
May 15 23:26:13.180388 kubelet[2570]: E0515 23:26:13.180363 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.180958 containerd[1456]: time="2025-05-15T23:26:13.180903860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9c6h4,Uid:1aee7d82-e01a-4611-ad4e-cff9df216cbe,Namespace:kube-system,Attempt:0,}" May 15 23:26:13.197531 containerd[1456]: time="2025-05-15T23:26:13.197461208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tzvp7,Uid:e6eb9c14-727f-4ff1-8783-91c6a9ddf896,Namespace:kube-system,Attempt:0,} returns sandbox id \"64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af\"" May 15 23:26:13.198118 kubelet[2570]: E0515 23:26:13.198047 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.200139 containerd[1456]: time="2025-05-15T23:26:13.200100352Z" level=info msg="CreateContainer within sandbox \"64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 23:26:13.207609 containerd[1456]: time="2025-05-15T23:26:13.207143230Z" level=info msg="connecting to shim fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7" address="unix:///run/containerd/s/cf0adff0d3d77154a18b83884649aefaad4bbb68e0740a503e85a425787ac3ff" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:13.207609 containerd[1456]: time="2025-05-15T23:26:13.207261584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7fvm,Uid:2562d114-4fd6-4bb9-8af3-2b847b20d342,Namespace:kube-system,Attempt:0,} returns sandbox id \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\"" May 15 23:26:13.207846 kubelet[2570]: E0515 23:26:13.207813 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.208948 containerd[1456]: time="2025-05-15T23:26:13.208914939Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 23:26:13.209022 containerd[1456]: time="2025-05-15T23:26:13.208968856Z" level=info msg="Container ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:13.216527 containerd[1456]: time="2025-05-15T23:26:13.216492469Z" level=info msg="CreateContainer within sandbox \"64efd62e5f86e92dc2797a075de4318174d5240bda68cb46809fde76df9f09af\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61\"" May 15 23:26:13.217759 containerd[1456]: time="2025-05-15T23:26:13.217722966Z" level=info msg="StartContainer for \"ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61\"" May 15 23:26:13.220294 containerd[1456]: time="2025-05-15T23:26:13.220249476Z" level=info msg="connecting to shim ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61" address="unix:///run/containerd/s/070550b309494d7a2b56f501baa6f6e384299b8e8608937209d0ccc25b7595ef" protocol=ttrpc version=3 May 15 23:26:13.230844 systemd[1]: Started cri-containerd-fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7.scope - 
libcontainer container fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7. May 15 23:26:13.233740 systemd[1]: Started cri-containerd-ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61.scope - libcontainer container ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61. May 15 23:26:13.261111 containerd[1456]: time="2025-05-15T23:26:13.261063495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9c6h4,Uid:1aee7d82-e01a-4611-ad4e-cff9df216cbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\"" May 15 23:26:13.261783 kubelet[2570]: E0515 23:26:13.261761 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.271543 containerd[1456]: time="2025-05-15T23:26:13.271450201Z" level=info msg="StartContainer for \"ac974dd161975eb0e2149a0783fe0c2f59ebec84916dc63f0e0ebe5a20584b61\" returns successfully" May 15 23:26:13.547214 kubelet[2570]: E0515 23:26:13.547014 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:13.556484 kubelet[2570]: I0515 23:26:13.556430 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tzvp7" podStartSLOduration=1.5564127760000002 podStartE2EDuration="1.556412776s" podCreationTimestamp="2025-05-15 23:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:13.556408936 +0000 UTC m=+7.126101430" watchObservedRunningTime="2025-05-15 23:26:13.556412776 +0000 UTC m=+7.126105230" May 15 23:26:18.729053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831715815.mount: Deactivated successfully. 
May 15 23:26:19.196307 kubelet[2570]: E0515 23:26:19.196085 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:19.471492 kubelet[2570]: E0515 23:26:19.471134 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:20.036972 containerd[1456]: time="2025-05-15T23:26:20.036910217Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:26:20.037528 containerd[1456]: time="2025-05-15T23:26:20.037468117Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 23:26:20.038158 containerd[1456]: time="2025-05-15T23:26:20.038134213Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:26:20.040231 containerd[1456]: time="2025-05-15T23:26:20.040184621Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.831236723s" May 15 23:26:20.040231 containerd[1456]: time="2025-05-15T23:26:20.040221540Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 23:26:20.044183 containerd[1456]: time="2025-05-15T23:26:20.044154321Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 23:26:20.045047 containerd[1456]: time="2025-05-15T23:26:20.045014411Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:26:20.051635 containerd[1456]: time="2025-05-15T23:26:20.051403426Z" level=info msg="Container 73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:20.056327 containerd[1456]: time="2025-05-15T23:26:20.056277854Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\"" May 15 23:26:20.057725 containerd[1456]: time="2025-05-15T23:26:20.056876073Z" level=info msg="StartContainer for \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\"" May 15 23:26:20.057725 containerd[1456]: time="2025-05-15T23:26:20.057599488Z" level=info msg="connecting to shim 73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" protocol=ttrpc 
version=3 May 15 23:26:20.093824 systemd[1]: Started cri-containerd-73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df.scope - libcontainer container 73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df. May 15 23:26:20.120005 containerd[1456]: time="2025-05-15T23:26:20.119957012Z" level=info msg="StartContainer for \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" returns successfully" May 15 23:26:20.177573 systemd[1]: cri-containerd-73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df.scope: Deactivated successfully. May 15 23:26:20.201036 containerd[1456]: time="2025-05-15T23:26:20.200916360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" id:\"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" pid:2993 exited_at:{seconds:1747351580 nanos:198167337}" May 15 23:26:20.204642 containerd[1456]: time="2025-05-15T23:26:20.204608590Z" level=info msg="received exit event container_id:\"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" id:\"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" pid:2993 exited_at:{seconds:1747351580 nanos:198167337}" May 15 23:26:20.234984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df-rootfs.mount: Deactivated successfully. May 15 23:26:20.562245 kubelet[2570]: E0515 23:26:20.562218 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:20.565199 containerd[1456]: time="2025-05-15T23:26:20.565134773Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:26:20.574604 containerd[1456]: time="2025-05-15T23:26:20.573932663Z" level=info msg="Container 5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:20.580738 containerd[1456]: time="2025-05-15T23:26:20.580636507Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\"" May 15 23:26:20.581732 containerd[1456]: time="2025-05-15T23:26:20.581102691Z" level=info msg="StartContainer for \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\"" May 15 23:26:20.581963 containerd[1456]: time="2025-05-15T23:26:20.581940421Z" level=info msg="connecting to shim 5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" protocol=ttrpc version=3 May 15 23:26:20.615962 systemd[1]: Started cri-containerd-5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299.scope - libcontainer container 5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299. May 15 23:26:20.642487 containerd[1456]: time="2025-05-15T23:26:20.642204419Z" level=info msg="StartContainer for \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" returns successfully" May 15 23:26:20.652857 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
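The exit events above report exited_at as a {seconds, nanos} pair rather than a formatted time. Converting the pair from the mount-cgroup exit event back to wall-clock time (values copied from the log) lands on the same instant the surrounding journal timestamps show:

package main

import (
	"fmt"
	"time"
)

func main() {
	// seconds/nanos copied from the exit event for container 73f3f720… above
	exitedAt := time.Unix(1747351580, 198167337).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano)) // 2025-05-15T23:26:20.198167337Z
}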
May 15 23:26:20.653329 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:26:20.653604 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 23:26:20.655103 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:26:20.655497 systemd[1]: cri-containerd-5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299.scope: Deactivated successfully. May 15 23:26:20.655819 containerd[1456]: time="2025-05-15T23:26:20.655537869Z" level=info msg="received exit event container_id:\"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" id:\"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" pid:3038 exited_at:{seconds:1747351580 nanos:655358236}" May 15 23:26:20.655870 containerd[1456]: time="2025-05-15T23:26:20.655817980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" id:\"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" pid:3038 exited_at:{seconds:1747351580 nanos:655358236}" May 15 23:26:20.693329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:26:20.954987 kubelet[2570]: E0515 23:26:20.954951 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:21.484963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725837700.mount: Deactivated successfully. May 15 23:26:21.571743 kubelet[2570]: E0515 23:26:21.571712 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:21.574465 containerd[1456]: time="2025-05-15T23:26:21.574375289Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:26:21.586003 containerd[1456]: time="2025-05-15T23:26:21.585969021Z" level=info msg="Container 8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:21.595182 containerd[1456]: time="2025-05-15T23:26:21.595143875Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\"" May 15 23:26:21.595781 containerd[1456]: time="2025-05-15T23:26:21.595727935Z" level=info msg="StartContainer for \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\"" May 15 23:26:21.597505 containerd[1456]: time="2025-05-15T23:26:21.597465477Z" level=info msg="connecting to shim 8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" protocol=ttrpc version=3 May 15 23:26:21.620002 systemd[1]: Started cri-containerd-8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916.scope - libcontainer container 8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916. 
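Each "connecting to shim ..." entry above carries a ttrpc address of the form unix:///run/containerd/s/&lt;hash&gt;. A hedged Go sketch of how such a unix:// address can be stripped down to a socket path and dialed as a plain Unix socket; it is illustrative only, since the real client layers the ttrpc protocol (protocol=ttrpc version=3 in the log) on top of the connection, which is not shown here:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Address copied from the "connecting to shim" entries above.
	addr := "unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731"

	// Drop the unix:// scheme to obtain the filesystem path of the socket.
	path := strings.TrimPrefix(addr, "unix://")

	// Plain Unix-socket dial; containerd itself speaks ttrpc over a connection like this.
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}
```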
May 15 23:26:21.664455 containerd[1456]: time="2025-05-15T23:26:21.664403559Z" level=info msg="StartContainer for \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" returns successfully" May 15 23:26:21.665026 systemd[1]: cri-containerd-8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916.scope: Deactivated successfully. May 15 23:26:21.667177 containerd[1456]: time="2025-05-15T23:26:21.667115788Z" level=info msg="received exit event container_id:\"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" id:\"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" pid:3097 exited_at:{seconds:1747351581 nanos:666950073}" May 15 23:26:21.667569 containerd[1456]: time="2025-05-15T23:26:21.667545773Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" id:\"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" pid:3097 exited_at:{seconds:1747351581 nanos:666950073}" May 15 23:26:21.834080 containerd[1456]: time="2025-05-15T23:26:21.833968648Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:26:21.835146 containerd[1456]: time="2025-05-15T23:26:21.835074211Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 23:26:21.836051 containerd[1456]: time="2025-05-15T23:26:21.836001660Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 23:26:21.837346 containerd[1456]: time="2025-05-15T23:26:21.837315816Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.793125416s" May 15 23:26:21.837421 containerd[1456]: time="2025-05-15T23:26:21.837351375Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 23:26:21.840051 containerd[1456]: time="2025-05-15T23:26:21.840020006Z" level=info msg="CreateContainer within sandbox \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 23:26:21.853724 containerd[1456]: time="2025-05-15T23:26:21.853387839Z" level=info msg="Container 03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:21.858269 containerd[1456]: time="2025-05-15T23:26:21.858224917Z" level=info msg="CreateContainer within sandbox \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\"" May 15 23:26:21.858911 containerd[1456]: time="2025-05-15T23:26:21.858712581Z" level=info 
msg="StartContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\"" May 15 23:26:21.859710 containerd[1456]: time="2025-05-15T23:26:21.859551673Z" level=info msg="connecting to shim 03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389" address="unix:///run/containerd/s/cf0adff0d3d77154a18b83884649aefaad4bbb68e0740a503e85a425787ac3ff" protocol=ttrpc version=3 May 15 23:26:21.881878 systemd[1]: Started cri-containerd-03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389.scope - libcontainer container 03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389. May 15 23:26:21.915508 containerd[1456]: time="2025-05-15T23:26:21.915359686Z" level=info msg="StartContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" returns successfully" May 15 23:26:22.574993 kubelet[2570]: E0515 23:26:22.574949 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:22.578882 kubelet[2570]: E0515 23:26:22.578860 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:22.580717 containerd[1456]: time="2025-05-15T23:26:22.580587287Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:26:22.592711 containerd[1456]: time="2025-05-15T23:26:22.591718013Z" level=info msg="Container 381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:22.600801 containerd[1456]: time="2025-05-15T23:26:22.600647650Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\"" May 15 23:26:22.602752 containerd[1456]: time="2025-05-15T23:26:22.601612779Z" level=info msg="StartContainer for \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\"" May 15 23:26:22.602752 containerd[1456]: time="2025-05-15T23:26:22.602389674Z" level=info msg="connecting to shim 381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" protocol=ttrpc version=3 May 15 23:26:22.624140 kubelet[2570]: I0515 23:26:22.624082 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-9c6h4" podStartSLOduration=2.048303466 podStartE2EDuration="10.624061946s" podCreationTimestamp="2025-05-15 23:26:12 +0000 UTC" firstStartedPulling="2025-05-15 23:26:13.262283272 +0000 UTC m=+6.831975766" lastFinishedPulling="2025-05-15 23:26:21.838041752 +0000 UTC m=+15.407734246" observedRunningTime="2025-05-15 23:26:22.598562396 +0000 UTC m=+16.168254890" watchObservedRunningTime="2025-05-15 23:26:22.624061946 +0000 UTC m=+16.193754440" May 15 23:26:22.634910 systemd[1]: Started cri-containerd-381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5.scope - libcontainer container 381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5. 
May 15 23:26:22.662186 systemd[1]: cri-containerd-381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5.scope: Deactivated successfully. May 15 23:26:22.665619 containerd[1456]: time="2025-05-15T23:26:22.665578426Z" level=info msg="StartContainer for \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" returns successfully" May 15 23:26:22.668846 containerd[1456]: time="2025-05-15T23:26:22.668811724Z" level=info msg="received exit event container_id:\"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" id:\"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" pid:3172 exited_at:{seconds:1747351582 nanos:668498334}" May 15 23:26:22.668937 containerd[1456]: time="2025-05-15T23:26:22.668864282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" id:\"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" pid:3172 exited_at:{seconds:1747351582 nanos:668498334}" May 15 23:26:22.687786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5-rootfs.mount: Deactivated successfully. May 15 23:26:23.069653 update_engine[1444]: I20250515 23:26:23.069101 1444 update_attempter.cc:509] Updating boot flags... May 15 23:26:23.104056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3204) May 15 23:26:23.133724 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3207) May 15 23:26:23.587375 kubelet[2570]: E0515 23:26:23.587338 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:23.587748 kubelet[2570]: E0515 23:26:23.587429 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:23.589750 containerd[1456]: time="2025-05-15T23:26:23.589715983Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:26:23.603473 containerd[1456]: time="2025-05-15T23:26:23.603414769Z" level=info msg="Container 44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:23.611263 containerd[1456]: time="2025-05-15T23:26:23.611224893Z" level=info msg="CreateContainer within sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\"" May 15 23:26:23.611983 containerd[1456]: time="2025-05-15T23:26:23.611936831Z" level=info msg="StartContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\"" May 15 23:26:23.612878 containerd[1456]: time="2025-05-15T23:26:23.612841204Z" level=info msg="connecting to shim 44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01" address="unix:///run/containerd/s/0bd756e1f945f10ce43aa7e3793e95576b36420ce232dc1793f40cf8785f2731" protocol=ttrpc version=3 May 15 23:26:23.640658 systemd[1]: Started cri-containerd-44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01.scope - libcontainer container 
44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01. May 15 23:26:23.681466 containerd[1456]: time="2025-05-15T23:26:23.681424092Z" level=info msg="StartContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" returns successfully" May 15 23:26:23.774436 containerd[1456]: time="2025-05-15T23:26:23.774391643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" id:\"ff47e6c291edf223c6096a8055fc3b8658726ebb2b012fe491c6f5fa22cb91cd\" pid:3254 exited_at:{seconds:1747351583 nanos:773682865}" May 15 23:26:23.787885 kubelet[2570]: I0515 23:26:23.786468 2570 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 15 23:26:23.835579 systemd[1]: Created slice kubepods-burstable-pod57c36847_1bae_4a47_b65c_2e963f90495f.slice - libcontainer container kubepods-burstable-pod57c36847_1bae_4a47_b65c_2e963f90495f.slice. May 15 23:26:23.839268 systemd[1]: Created slice kubepods-burstable-podbebb878e_aa58_4f18_a7b8_96a215a4b76f.slice - libcontainer container kubepods-burstable-podbebb878e_aa58_4f18_a7b8_96a215a4b76f.slice. May 15 23:26:24.012515 kubelet[2570]: I0515 23:26:24.012389 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bebb878e-aa58-4f18-a7b8-96a215a4b76f-config-volume\") pod \"coredns-7c65d6cfc9-fmbqn\" (UID: \"bebb878e-aa58-4f18-a7b8-96a215a4b76f\") " pod="kube-system/coredns-7c65d6cfc9-fmbqn" May 15 23:26:24.012870 kubelet[2570]: I0515 23:26:24.012732 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/57c36847-1bae-4a47-b65c-2e963f90495f-config-volume\") pod \"coredns-7c65d6cfc9-qgq95\" (UID: \"57c36847-1bae-4a47-b65c-2e963f90495f\") " pod="kube-system/coredns-7c65d6cfc9-qgq95" May 15 23:26:24.012870 kubelet[2570]: I0515 23:26:24.012768 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wthrv\" (UniqueName: \"kubernetes.io/projected/bebb878e-aa58-4f18-a7b8-96a215a4b76f-kube-api-access-wthrv\") pod \"coredns-7c65d6cfc9-fmbqn\" (UID: \"bebb878e-aa58-4f18-a7b8-96a215a4b76f\") " pod="kube-system/coredns-7c65d6cfc9-fmbqn" May 15 23:26:24.012870 kubelet[2570]: I0515 23:26:24.012821 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvzkt\" (UniqueName: \"kubernetes.io/projected/57c36847-1bae-4a47-b65c-2e963f90495f-kube-api-access-kvzkt\") pod \"coredns-7c65d6cfc9-qgq95\" (UID: \"57c36847-1bae-4a47-b65c-2e963f90495f\") " pod="kube-system/coredns-7c65d6cfc9-qgq95" May 15 23:26:24.139194 kubelet[2570]: E0515 23:26:24.139078 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:24.140212 containerd[1456]: time="2025-05-15T23:26:24.140171797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qgq95,Uid:57c36847-1bae-4a47-b65c-2e963f90495f,Namespace:kube-system,Attempt:0,}" May 15 23:26:24.144802 kubelet[2570]: E0515 23:26:24.144761 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:24.152995 containerd[1456]: 
time="2025-05-15T23:26:24.152429804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmbqn,Uid:bebb878e-aa58-4f18-a7b8-96a215a4b76f,Namespace:kube-system,Attempt:0,}" May 15 23:26:24.594242 kubelet[2570]: E0515 23:26:24.593521 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:24.608989 kubelet[2570]: I0515 23:26:24.608926 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b7fvm" podStartSLOduration=5.773616587 podStartE2EDuration="12.608910122s" podCreationTimestamp="2025-05-15 23:26:12 +0000 UTC" firstStartedPulling="2025-05-15 23:26:13.208541518 +0000 UTC m=+6.778234012" lastFinishedPulling="2025-05-15 23:26:20.043835013 +0000 UTC m=+13.613527547" observedRunningTime="2025-05-15 23:26:24.608244581 +0000 UTC m=+18.177937075" watchObservedRunningTime="2025-05-15 23:26:24.608910122 +0000 UTC m=+18.178602616" May 15 23:26:25.595508 kubelet[2570]: E0515 23:26:25.595453 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:25.906798 systemd-networkd[1389]: cilium_host: Link UP May 15 23:26:25.907743 systemd-networkd[1389]: cilium_net: Link UP May 15 23:26:25.908301 systemd-networkd[1389]: cilium_net: Gained carrier May 15 23:26:25.908567 systemd-networkd[1389]: cilium_host: Gained carrier May 15 23:26:25.991610 systemd-networkd[1389]: cilium_vxlan: Link UP May 15 23:26:25.991618 systemd-networkd[1389]: cilium_vxlan: Gained carrier May 15 23:26:26.238018 systemd-networkd[1389]: cilium_net: Gained IPv6LL May 15 23:26:26.303749 kernel: NET: Registered PF_ALG protocol family May 15 23:26:26.507873 systemd-networkd[1389]: cilium_host: Gained IPv6LL May 15 23:26:26.596875 kubelet[2570]: E0515 23:26:26.596831 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:26.889728 systemd-networkd[1389]: lxc_health: Link UP May 15 23:26:26.889990 systemd-networkd[1389]: lxc_health: Gained carrier May 15 23:26:27.148830 systemd-networkd[1389]: cilium_vxlan: Gained IPv6LL May 15 23:26:27.276732 kernel: eth0: renamed from tmpf6693 May 15 23:26:27.285750 kernel: eth0: renamed from tmpbdfd2 May 15 23:26:27.292028 systemd-networkd[1389]: lxc688464c28c69: Link UP May 15 23:26:27.293217 systemd-networkd[1389]: lxc522826108beb: Link UP May 15 23:26:27.296255 systemd-networkd[1389]: lxc688464c28c69: Gained carrier May 15 23:26:27.296852 systemd-networkd[1389]: lxc522826108beb: Gained carrier May 15 23:26:27.598333 kubelet[2570]: E0515 23:26:27.598068 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:28.107881 systemd-networkd[1389]: lxc_health: Gained IPv6LL May 15 23:26:28.747845 systemd-networkd[1389]: lxc522826108beb: Gained IPv6LL May 15 23:26:28.762826 kubelet[2570]: I0515 23:26:28.762384 2570 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 23:26:28.764670 kubelet[2570]: E0515 23:26:28.763008 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 15 23:26:29.259846 systemd-networkd[1389]: lxc688464c28c69: Gained IPv6LL May 15 23:26:29.601707 kubelet[2570]: E0515 23:26:29.601581 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:30.751896 containerd[1456]: time="2025-05-15T23:26:30.751850583Z" level=info msg="connecting to shim bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c" address="unix:///run/containerd/s/b276264b1a961fd7c536a0228039048ab615434546fa0efd8c1d64953ddb74a9" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:30.752220 containerd[1456]: time="2025-05-15T23:26:30.751862422Z" level=info msg="connecting to shim f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d" address="unix:///run/containerd/s/577404c4f55d4cf6468a89bbb4d115069c01c76d76310b37e1ef3035d8963ea4" namespace=k8s.io protocol=ttrpc version=3 May 15 23:26:30.774831 systemd[1]: Started cri-containerd-bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c.scope - libcontainer container bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c. May 15 23:26:30.777091 systemd[1]: Started cri-containerd-f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d.scope - libcontainer container f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d. May 15 23:26:30.787640 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:26:30.788548 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 23:26:30.813189 containerd[1456]: time="2025-05-15T23:26:30.813148493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fmbqn,Uid:bebb878e-aa58-4f18-a7b8-96a215a4b76f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c\"" May 15 23:26:30.814132 kubelet[2570]: E0515 23:26:30.814109 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:30.814472 containerd[1456]: time="2025-05-15T23:26:30.814319867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qgq95,Uid:57c36847-1bae-4a47-b65c-2e963f90495f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d\"" May 15 23:26:30.815640 kubelet[2570]: E0515 23:26:30.815616 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:30.817331 containerd[1456]: time="2025-05-15T23:26:30.817297843Z" level=info msg="CreateContainer within sandbox \"bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:26:30.819128 containerd[1456]: time="2025-05-15T23:26:30.819045725Z" level=info msg="CreateContainer within sandbox \"f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 23:26:30.827279 containerd[1456]: time="2025-05-15T23:26:30.826203290Z" level=info msg="Container 517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b: CDI devices from CRI Config.CDIDevices: []" May 15 
23:26:30.830898 containerd[1456]: time="2025-05-15T23:26:30.830534956Z" level=info msg="Container 7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb: CDI devices from CRI Config.CDIDevices: []" May 15 23:26:30.832354 containerd[1456]: time="2025-05-15T23:26:30.832312637Z" level=info msg="CreateContainer within sandbox \"f6693c058aa3159861ecec0b3eb6fa15beffcb2193a40bf9acc3e4e49669dc7d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b\"" May 15 23:26:30.832991 containerd[1456]: time="2025-05-15T23:26:30.832769707Z" level=info msg="StartContainer for \"517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b\"" May 15 23:26:30.834353 containerd[1456]: time="2025-05-15T23:26:30.834317034Z" level=info msg="connecting to shim 517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b" address="unix:///run/containerd/s/577404c4f55d4cf6468a89bbb4d115069c01c76d76310b37e1ef3035d8963ea4" protocol=ttrpc version=3 May 15 23:26:30.837007 containerd[1456]: time="2025-05-15T23:26:30.836968776Z" level=info msg="CreateContainer within sandbox \"bdfd299007e109224b39c8f97f0d5bf78cd7c18d4494f1e5e5d119e41ba15c6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb\"" May 15 23:26:30.838902 containerd[1456]: time="2025-05-15T23:26:30.837641481Z" level=info msg="StartContainer for \"7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb\"" May 15 23:26:30.844004 containerd[1456]: time="2025-05-15T23:26:30.843972184Z" level=info msg="connecting to shim 7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb" address="unix:///run/containerd/s/b276264b1a961fd7c536a0228039048ab615434546fa0efd8c1d64953ddb74a9" protocol=ttrpc version=3 May 15 23:26:30.853839 systemd[1]: Started cri-containerd-517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b.scope - libcontainer container 517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b. May 15 23:26:30.857097 systemd[1]: Started cri-containerd-7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb.scope - libcontainer container 7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb. 
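The recurring kubelet error "Nameserver limits were exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" reflects kubelet's cap of three nameservers per resolv.conf: the node's resolver configuration evidently lists more, and only the first three are applied. A hedged Go sketch of that truncation behaviour; the limit constant matches upstream kubelet, but the resolv.conf parsing and the sample input below are deliberately simplified and hypothetical:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxDNSNameservers mirrors kubelet's limit of three nameservers,
// the same limit behind the "Nameserver limits exceeded" messages above.
const maxDNSNameservers = 3

// applyNameserverLimit returns the nameservers that would actually be applied,
// plus whether any were dropped. Simplified resolv.conf parsing for illustration.
func applyNameserverLimit(resolvConf string) (applied []string, exceeded bool) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			applied = append(applied, fields[1])
		}
	}
	if len(applied) > maxDNSNameservers {
		return applied[:maxDNSNameservers], true
	}
	return applied, false
}

func main() {
	// Hypothetical host resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	applied, exceeded := applyNameserverLimit(conf)
	fmt.Println(strings.Join(applied, " "), "exceeded:", exceeded)
	// Output: 1.1.1.1 1.0.0.1 8.8.8.8 exceeded: true
}
```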
May 15 23:26:30.901094 containerd[1456]: time="2025-05-15T23:26:30.898270726Z" level=info msg="StartContainer for \"517939bf0d37bbf071cfcd557732b7d5ced67352ef168e8e28733624f28c862b\" returns successfully" May 15 23:26:30.901094 containerd[1456]: time="2025-05-15T23:26:30.898664118Z" level=info msg="StartContainer for \"7c2b97429c4b77806c2b34142bfac5c5ba87a1b4c874a8f45a032d52ce3396bb\" returns successfully" May 15 23:26:31.607255 kubelet[2570]: E0515 23:26:31.606780 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:31.610434 kubelet[2570]: E0515 23:26:31.610339 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:31.620361 kubelet[2570]: I0515 23:26:31.620295 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fmbqn" podStartSLOduration=19.620155162 podStartE2EDuration="19.620155162s" podCreationTimestamp="2025-05-15 23:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:31.619271341 +0000 UTC m=+25.188963835" watchObservedRunningTime="2025-05-15 23:26:31.620155162 +0000 UTC m=+25.189847656" May 15 23:26:31.630908 kubelet[2570]: I0515 23:26:31.629922 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qgq95" podStartSLOduration=19.62989492 podStartE2EDuration="19.62989492s" podCreationTimestamp="2025-05-15 23:26:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:26:31.62943925 +0000 UTC m=+25.199131744" watchObservedRunningTime="2025-05-15 23:26:31.62989492 +0000 UTC m=+25.199587414" May 15 23:26:32.612637 kubelet[2570]: E0515 23:26:32.612607 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:32.613178 kubelet[2570]: E0515 23:26:32.612661 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:32.765279 systemd[1]: Started sshd@7-10.0.0.41:22-10.0.0.1:54160.service - OpenSSH per-connection server daemon (10.0.0.1:54160). May 15 23:26:32.821889 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 54160 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:32.825159 sshd-session[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:32.829451 systemd-logind[1440]: New session 8 of user core. May 15 23:26:32.837863 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 23:26:32.963634 sshd[3913]: Connection closed by 10.0.0.1 port 54160 May 15 23:26:32.964155 sshd-session[3911]: pam_unix(sshd:session): session closed for user core May 15 23:26:32.967660 systemd[1]: sshd@7-10.0.0.41:22-10.0.0.1:54160.service: Deactivated successfully. May 15 23:26:32.969382 systemd[1]: session-8.scope: Deactivated successfully. May 15 23:26:32.970814 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. 
May 15 23:26:32.972135 systemd-logind[1440]: Removed session 8. May 15 23:26:33.614458 kubelet[2570]: E0515 23:26:33.614374 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:33.614458 kubelet[2570]: E0515 23:26:33.614432 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:26:37.975202 systemd[1]: Started sshd@8-10.0.0.41:22-10.0.0.1:54174.service - OpenSSH per-connection server daemon (10.0.0.1:54174). May 15 23:26:38.022457 sshd[3929]: Accepted publickey for core from 10.0.0.1 port 54174 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:38.023839 sshd-session[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:38.027764 systemd-logind[1440]: New session 9 of user core. May 15 23:26:38.036837 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 23:26:38.146303 sshd[3931]: Connection closed by 10.0.0.1 port 54174 May 15 23:26:38.146997 sshd-session[3929]: pam_unix(sshd:session): session closed for user core May 15 23:26:38.149600 systemd[1]: sshd@8-10.0.0.41:22-10.0.0.1:54174.service: Deactivated successfully. May 15 23:26:38.151290 systemd[1]: session-9.scope: Deactivated successfully. May 15 23:26:38.152456 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. May 15 23:26:38.155306 systemd-logind[1440]: Removed session 9. May 15 23:26:43.159350 systemd[1]: Started sshd@9-10.0.0.41:22-10.0.0.1:56288.service - OpenSSH per-connection server daemon (10.0.0.1:56288). May 15 23:26:43.216334 sshd[3945]: Accepted publickey for core from 10.0.0.1 port 56288 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:43.218403 sshd-session[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:43.223229 systemd-logind[1440]: New session 10 of user core. May 15 23:26:43.231921 systemd[1]: Started session-10.scope - Session 10 of User core. May 15 23:26:43.343621 sshd[3948]: Connection closed by 10.0.0.1 port 56288 May 15 23:26:43.343888 sshd-session[3945]: pam_unix(sshd:session): session closed for user core May 15 23:26:43.347169 systemd[1]: sshd@9-10.0.0.41:22-10.0.0.1:56288.service: Deactivated successfully. May 15 23:26:43.349492 systemd[1]: session-10.scope: Deactivated successfully. May 15 23:26:43.351002 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. May 15 23:26:43.351850 systemd-logind[1440]: Removed session 10. May 15 23:26:48.358038 systemd[1]: Started sshd@10-10.0.0.41:22-10.0.0.1:56290.service - OpenSSH per-connection server daemon (10.0.0.1:56290). May 15 23:26:48.406681 sshd[3964]: Accepted publickey for core from 10.0.0.1 port 56290 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:48.407957 sshd-session[3964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:48.412319 systemd-logind[1440]: New session 11 of user core. May 15 23:26:48.426847 systemd[1]: Started session-11.scope - Session 11 of User core. 
May 15 23:26:48.543764 sshd[3966]: Connection closed by 10.0.0.1 port 56290 May 15 23:26:48.544713 sshd-session[3964]: pam_unix(sshd:session): session closed for user core May 15 23:26:48.554215 systemd[1]: sshd@10-10.0.0.41:22-10.0.0.1:56290.service: Deactivated successfully. May 15 23:26:48.555734 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:26:48.556367 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. May 15 23:26:48.558751 systemd[1]: Started sshd@11-10.0.0.41:22-10.0.0.1:56298.service - OpenSSH per-connection server daemon (10.0.0.1:56298). May 15 23:26:48.561937 systemd-logind[1440]: Removed session 11. May 15 23:26:48.620032 sshd[3980]: Accepted publickey for core from 10.0.0.1 port 56298 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:48.622876 sshd-session[3980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:48.630541 systemd-logind[1440]: New session 12 of user core. May 15 23:26:48.639896 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:26:48.791939 sshd[3983]: Connection closed by 10.0.0.1 port 56298 May 15 23:26:48.792477 sshd-session[3980]: pam_unix(sshd:session): session closed for user core May 15 23:26:48.806114 systemd[1]: sshd@11-10.0.0.41:22-10.0.0.1:56298.service: Deactivated successfully. May 15 23:26:48.807859 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:26:48.811364 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. May 15 23:26:48.813884 systemd[1]: Started sshd@12-10.0.0.41:22-10.0.0.1:56302.service - OpenSSH per-connection server daemon (10.0.0.1:56302). May 15 23:26:48.816448 systemd-logind[1440]: Removed session 12. May 15 23:26:48.866315 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 56302 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:48.867548 sshd-session[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:48.872409 systemd-logind[1440]: New session 13 of user core. May 15 23:26:48.884847 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 23:26:49.003446 sshd[3997]: Connection closed by 10.0.0.1 port 56302 May 15 23:26:49.004162 sshd-session[3994]: pam_unix(sshd:session): session closed for user core May 15 23:26:49.008098 systemd[1]: sshd@12-10.0.0.41:22-10.0.0.1:56302.service: Deactivated successfully. May 15 23:26:49.010817 systemd[1]: session-13.scope: Deactivated successfully. May 15 23:26:49.011992 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. May 15 23:26:49.012938 systemd-logind[1440]: Removed session 13. May 15 23:26:54.015256 systemd[1]: Started sshd@13-10.0.0.41:22-10.0.0.1:54330.service - OpenSSH per-connection server daemon (10.0.0.1:54330). May 15 23:26:54.068938 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 54330 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:54.070073 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:54.074211 systemd-logind[1440]: New session 14 of user core. May 15 23:26:54.088843 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 15 23:26:54.193563 sshd[4014]: Connection closed by 10.0.0.1 port 54330 May 15 23:26:54.194104 sshd-session[4012]: pam_unix(sshd:session): session closed for user core May 15 23:26:54.197488 systemd[1]: sshd@13-10.0.0.41:22-10.0.0.1:54330.service: Deactivated successfully. May 15 23:26:54.199304 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:26:54.199972 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. May 15 23:26:54.200790 systemd-logind[1440]: Removed session 14. May 15 23:26:59.213941 systemd[1]: Started sshd@14-10.0.0.41:22-10.0.0.1:54344.service - OpenSSH per-connection server daemon (10.0.0.1:54344). May 15 23:26:59.252572 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 54344 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:59.253682 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:59.257282 systemd-logind[1440]: New session 15 of user core. May 15 23:26:59.267980 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 23:26:59.374073 sshd[4029]: Connection closed by 10.0.0.1 port 54344 May 15 23:26:59.374411 sshd-session[4027]: pam_unix(sshd:session): session closed for user core May 15 23:26:59.387943 systemd[1]: sshd@14-10.0.0.41:22-10.0.0.1:54344.service: Deactivated successfully. May 15 23:26:59.389580 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:26:59.390830 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. May 15 23:26:59.392014 systemd[1]: Started sshd@15-10.0.0.41:22-10.0.0.1:54346.service - OpenSSH per-connection server daemon (10.0.0.1:54346). May 15 23:26:59.392778 systemd-logind[1440]: Removed session 15. May 15 23:26:59.443998 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 54346 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:59.445156 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:59.449663 systemd-logind[1440]: New session 16 of user core. May 15 23:26:59.461900 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 23:26:59.668375 sshd[4044]: Connection closed by 10.0.0.1 port 54346 May 15 23:26:59.669111 sshd-session[4041]: pam_unix(sshd:session): session closed for user core May 15 23:26:59.682197 systemd[1]: sshd@15-10.0.0.41:22-10.0.0.1:54346.service: Deactivated successfully. May 15 23:26:59.683841 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:26:59.685532 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. May 15 23:26:59.686859 systemd[1]: Started sshd@16-10.0.0.41:22-10.0.0.1:54358.service - OpenSSH per-connection server daemon (10.0.0.1:54358). May 15 23:26:59.688431 systemd-logind[1440]: Removed session 16. May 15 23:26:59.736221 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 54358 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:26:59.737625 sshd-session[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:26:59.742261 systemd-logind[1440]: New session 17 of user core. May 15 23:26:59.749850 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 15 23:27:01.000253 sshd[4058]: Connection closed by 10.0.0.1 port 54358 May 15 23:27:01.000737 sshd-session[4055]: pam_unix(sshd:session): session closed for user core May 15 23:27:01.011868 systemd[1]: sshd@16-10.0.0.41:22-10.0.0.1:54358.service: Deactivated successfully. May 15 23:27:01.016981 systemd[1]: session-17.scope: Deactivated successfully. May 15 23:27:01.018883 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. May 15 23:27:01.022875 systemd[1]: Started sshd@17-10.0.0.41:22-10.0.0.1:54366.service - OpenSSH per-connection server daemon (10.0.0.1:54366). May 15 23:27:01.026678 systemd-logind[1440]: Removed session 17. May 15 23:27:01.070242 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 54366 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:01.071315 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:01.075752 systemd-logind[1440]: New session 18 of user core. May 15 23:27:01.090825 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:27:01.300794 sshd[4083]: Connection closed by 10.0.0.1 port 54366 May 15 23:27:01.301565 sshd-session[4080]: pam_unix(sshd:session): session closed for user core May 15 23:27:01.312497 systemd[1]: sshd@17-10.0.0.41:22-10.0.0.1:54366.service: Deactivated successfully. May 15 23:27:01.314402 systemd[1]: session-18.scope: Deactivated successfully. May 15 23:27:01.315360 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. May 15 23:27:01.318120 systemd[1]: Started sshd@18-10.0.0.41:22-10.0.0.1:54382.service - OpenSSH per-connection server daemon (10.0.0.1:54382). May 15 23:27:01.319004 systemd-logind[1440]: Removed session 18. May 15 23:27:01.365139 sshd[4094]: Accepted publickey for core from 10.0.0.1 port 54382 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:01.366329 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:01.372796 systemd-logind[1440]: New session 19 of user core. May 15 23:27:01.378825 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:27:01.481819 sshd[4097]: Connection closed by 10.0.0.1 port 54382 May 15 23:27:01.482201 sshd-session[4094]: pam_unix(sshd:session): session closed for user core May 15 23:27:01.485651 systemd[1]: sshd@18-10.0.0.41:22-10.0.0.1:54382.service: Deactivated successfully. May 15 23:27:01.488317 systemd[1]: session-19.scope: Deactivated successfully. May 15 23:27:01.489544 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. May 15 23:27:01.490388 systemd-logind[1440]: Removed session 19. May 15 23:27:06.497715 systemd[1]: Started sshd@19-10.0.0.41:22-10.0.0.1:45922.service - OpenSSH per-connection server daemon (10.0.0.1:45922). May 15 23:27:06.543806 sshd[4113]: Accepted publickey for core from 10.0.0.1 port 45922 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:06.544941 sshd-session[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:06.548570 systemd-logind[1440]: New session 20 of user core. May 15 23:27:06.554842 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 23:27:06.660714 sshd[4117]: Connection closed by 10.0.0.1 port 45922 May 15 23:27:06.661015 sshd-session[4113]: pam_unix(sshd:session): session closed for user core May 15 23:27:06.664720 systemd-logind[1440]: Session 20 logged out. 
Waiting for processes to exit. May 15 23:27:06.665048 systemd[1]: sshd@19-10.0.0.41:22-10.0.0.1:45922.service: Deactivated successfully. May 15 23:27:06.667607 systemd[1]: session-20.scope: Deactivated successfully. May 15 23:27:06.669319 systemd-logind[1440]: Removed session 20. May 15 23:27:11.672011 systemd[1]: Started sshd@20-10.0.0.41:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936). May 15 23:27:11.718722 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:11.719904 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:11.723890 systemd-logind[1440]: New session 21 of user core. May 15 23:27:11.731823 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 23:27:11.836481 sshd[4132]: Connection closed by 10.0.0.1 port 45936 May 15 23:27:11.836837 sshd-session[4130]: pam_unix(sshd:session): session closed for user core May 15 23:27:11.839854 systemd[1]: sshd@20-10.0.0.41:22-10.0.0.1:45936.service: Deactivated successfully. May 15 23:27:11.841403 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:27:11.842549 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. May 15 23:27:11.843448 systemd-logind[1440]: Removed session 21. May 15 23:27:16.847864 systemd[1]: Started sshd@21-10.0.0.41:22-10.0.0.1:46786.service - OpenSSH per-connection server daemon (10.0.0.1:46786). May 15 23:27:16.898018 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 46786 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:16.899145 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:16.902702 systemd-logind[1440]: New session 22 of user core. May 15 23:27:16.912813 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 23:27:17.018289 sshd[4149]: Connection closed by 10.0.0.1 port 46786 May 15 23:27:17.018676 sshd-session[4147]: pam_unix(sshd:session): session closed for user core May 15 23:27:17.032839 systemd[1]: sshd@21-10.0.0.41:22-10.0.0.1:46786.service: Deactivated successfully. May 15 23:27:17.034318 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:27:17.035467 systemd-logind[1440]: Session 22 logged out. Waiting for processes to exit. May 15 23:27:17.036650 systemd[1]: Started sshd@22-10.0.0.41:22-10.0.0.1:46792.service - OpenSSH per-connection server daemon (10.0.0.1:46792). May 15 23:27:17.037474 systemd-logind[1440]: Removed session 22. May 15 23:27:17.084213 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 46792 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:17.085312 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:17.089351 systemd-logind[1440]: New session 23 of user core. May 15 23:27:17.092896 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 23:27:19.242623 containerd[1456]: time="2025-05-15T23:27:19.242101503Z" level=info msg="StopContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" with timeout 30 (s)" May 15 23:27:19.266824 containerd[1456]: time="2025-05-15T23:27:19.266777613Z" level=info msg="Stop container \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" with signal terminated" May 15 23:27:19.276719 systemd[1]: cri-containerd-03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389.scope: Deactivated successfully. May 15 23:27:19.279091 containerd[1456]: time="2025-05-15T23:27:19.278780522Z" level=info msg="received exit event container_id:\"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" id:\"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" pid:3138 exited_at:{seconds:1747351639 nanos:277639304}" May 15 23:27:19.279091 containerd[1456]: time="2025-05-15T23:27:19.278812603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" id:\"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" pid:3138 exited_at:{seconds:1747351639 nanos:277639304}" May 15 23:27:19.288499 containerd[1456]: time="2025-05-15T23:27:19.288466555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" id:\"96012c7d6cde989b6711cd162939ff165c081b71ed12e6cadaef75c7baa17dcc\" pid:4188 exited_at:{seconds:1747351639 nanos:288264432}" May 15 23:27:19.290567 containerd[1456]: time="2025-05-15T23:27:19.290531468Z" level=info msg="StopContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" with timeout 2 (s)" May 15 23:27:19.290936 containerd[1456]: time="2025-05-15T23:27:19.290914554Z" level=info msg="Stop container \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" with signal terminated" May 15 23:27:19.293761 containerd[1456]: time="2025-05-15T23:27:19.293709558Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:27:19.302232 systemd-networkd[1389]: lxc_health: Link DOWN May 15 23:27:19.302236 systemd-networkd[1389]: lxc_health: Lost carrier May 15 23:27:19.306433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389-rootfs.mount: Deactivated successfully. May 15 23:27:19.316821 containerd[1456]: time="2025-05-15T23:27:19.316778162Z" level=info msg="StopContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" returns successfully" May 15 23:27:19.317419 containerd[1456]: time="2025-05-15T23:27:19.317396052Z" level=info msg="StopPodSandbox for \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\"" May 15 23:27:19.317467 containerd[1456]: time="2025-05-15T23:27:19.317452213Z" level=info msg="Container to stop \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.320337 systemd[1]: cri-containerd-44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01.scope: Deactivated successfully. 
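The shutdown sequence above ("StopContainer ... with timeout 30 (s)", "Stop container ... with signal terminated") follows the usual graceful-stop pattern: deliver SIGTERM, wait up to the timeout, then force-kill. A minimal Go sketch of that pattern against an arbitrary PID; this is illustrative only, since the CRI runtime signals the task through the shim rather than directly, and the PID below is simply copied from the log:

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM to pid, polls for exit until the timeout
// elapses, and then falls back to SIGKILL. Purely illustrative of the
// stop-with-timeout behaviour recorded in the log.
func stopWithTimeout(pid int, timeout time.Duration) error {
	if err := syscall.Kill(pid, syscall.SIGTERM); err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Signal 0 probes whether the process still exists; an error is
		// treated as "already gone", which is good enough for a sketch.
		if err := syscall.Kill(pid, 0); err != nil {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	fmt.Println("timeout reached, escalating to SIGKILL")
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	// PID taken from the TaskExit entry for the cilium-operator container above.
	_ = stopWithTimeout(3138, 30*time.Second)
}
```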
May 15 23:27:19.320634 systemd[1]: cri-containerd-44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01.scope: Consumed 6.378s CPU time, 124.1M memory peak, 136K read from disk, 12.9M written to disk. May 15 23:27:19.321755 containerd[1456]: time="2025-05-15T23:27:19.321395475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" pid:3223 exited_at:{seconds:1747351639 nanos:321096191}" May 15 23:27:19.321755 containerd[1456]: time="2025-05-15T23:27:19.321472117Z" level=info msg="received exit event container_id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" id:\"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" pid:3223 exited_at:{seconds:1747351639 nanos:321096191}" May 15 23:27:19.323407 systemd[1]: cri-containerd-fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7.scope: Deactivated successfully. May 15 23:27:19.325908 containerd[1456]: time="2025-05-15T23:27:19.325174455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" id:\"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" pid:2789 exit_status:137 exited_at:{seconds:1747351639 nanos:324885250}" May 15 23:27:19.340392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01-rootfs.mount: Deactivated successfully. May 15 23:27:19.346994 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7-rootfs.mount: Deactivated successfully. May 15 23:27:19.348362 containerd[1456]: time="2025-05-15T23:27:19.348331221Z" level=info msg="StopContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" returns successfully" May 15 23:27:19.349466 containerd[1456]: time="2025-05-15T23:27:19.349443238Z" level=info msg="shim disconnected" id=fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7 namespace=k8s.io May 15 23:27:19.349504 containerd[1456]: time="2025-05-15T23:27:19.349469919Z" level=warning msg="cleaning up after shim disconnected" id=fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7 namespace=k8s.io May 15 23:27:19.349504 containerd[1456]: time="2025-05-15T23:27:19.349497959Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:27:19.350942 containerd[1456]: time="2025-05-15T23:27:19.350398893Z" level=info msg="StopPodSandbox for \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\"" May 15 23:27:19.350942 containerd[1456]: time="2025-05-15T23:27:19.350474055Z" level=info msg="Container to stop \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.350942 containerd[1456]: time="2025-05-15T23:27:19.350487935Z" level=info msg="Container to stop \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.350942 containerd[1456]: time="2025-05-15T23:27:19.350496735Z" level=info msg="Container to stop \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.350942 containerd[1456]: 
time="2025-05-15T23:27:19.350504495Z" level=info msg="Container to stop \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.350942 containerd[1456]: time="2025-05-15T23:27:19.350512055Z" level=info msg="Container to stop \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:27:19.356354 systemd[1]: cri-containerd-c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721.scope: Deactivated successfully. May 15 23:27:19.363206 containerd[1456]: time="2025-05-15T23:27:19.363168295Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" id:\"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" pid:2727 exit_status:137 exited_at:{seconds:1747351639 nanos:356969237}" May 15 23:27:19.364913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7-shm.mount: Deactivated successfully. May 15 23:27:19.365436 containerd[1456]: time="2025-05-15T23:27:19.365243408Z" level=info msg="received exit event sandbox_id:\"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" exit_status:137 exited_at:{seconds:1747351639 nanos:324885250}" May 15 23:27:19.368951 containerd[1456]: time="2025-05-15T23:27:19.368920466Z" level=info msg="TearDown network for sandbox \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" successfully" May 15 23:27:19.369058 containerd[1456]: time="2025-05-15T23:27:19.369043948Z" level=info msg="StopPodSandbox for \"fdd42d723b6e875b8eb2193dc4f0da410470dd09c2f4c6b74cf71302573bacb7\" returns successfully" May 15 23:27:19.374583 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721-rootfs.mount: Deactivated successfully. 
May 15 23:27:19.379127 containerd[1456]: time="2025-05-15T23:27:19.378949944Z" level=info msg="received exit event sandbox_id:\"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" exit_status:137 exited_at:{seconds:1747351639 nanos:356969237}" May 15 23:27:19.379256 containerd[1456]: time="2025-05-15T23:27:19.379147947Z" level=info msg="TearDown network for sandbox \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" successfully" May 15 23:27:19.379256 containerd[1456]: time="2025-05-15T23:27:19.379169348Z" level=info msg="StopPodSandbox for \"c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721\" returns successfully" May 15 23:27:19.379256 containerd[1456]: time="2025-05-15T23:27:19.379198788Z" level=info msg="shim disconnected" id=c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721 namespace=k8s.io May 15 23:27:19.379321 containerd[1456]: time="2025-05-15T23:27:19.379236549Z" level=warning msg="cleaning up after shim disconnected" id=c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721 namespace=k8s.io May 15 23:27:19.379321 containerd[1456]: time="2025-05-15T23:27:19.379265989Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533612 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cni-path\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533657 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rtqb\" (UniqueName: \"kubernetes.io/projected/1aee7d82-e01a-4611-ad4e-cff9df216cbe-kube-api-access-6rtqb\") pod \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\" (UID: \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\") " May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533678 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-bpf-maps\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533707 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-run\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533723 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-kernel\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.533851 kubelet[2570]: I0515 23:27:19.533739 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aee7d82-e01a-4611-ad4e-cff9df216cbe-cilium-config-path\") pod \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\" (UID: \"1aee7d82-e01a-4611-ad4e-cff9df216cbe\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533760 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-lib-modules\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533776 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-xtables-lock\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533792 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6khb\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-kube-api-access-s6khb\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533807 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-net\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533821 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-etc-cni-netd\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534340 kubelet[2570]: I0515 23:27:19.533845 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-config-path\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534464 kubelet[2570]: I0515 23:27:19.533877 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2562d114-4fd6-4bb9-8af3-2b847b20d342-clustermesh-secrets\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534464 kubelet[2570]: I0515 23:27:19.533897 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-hubble-tls\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534464 kubelet[2570]: I0515 23:27:19.533910 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-hostproc\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.534464 kubelet[2570]: I0515 23:27:19.533923 2570 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-cgroup\") pod \"2562d114-4fd6-4bb9-8af3-2b847b20d342\" (UID: \"2562d114-4fd6-4bb9-8af3-2b847b20d342\") " May 15 23:27:19.536923 kubelet[2570]: I0515 23:27:19.536344 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-run" 
(OuterVolumeSpecName: "cilium-run") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.536923 kubelet[2570]: I0515 23:27:19.536390 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.536923 kubelet[2570]: I0515 23:27:19.536563 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.536923 kubelet[2570]: I0515 23:27:19.536599 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.536923 kubelet[2570]: I0515 23:27:19.536627 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.537094 kubelet[2570]: I0515 23:27:19.536642 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.537094 kubelet[2570]: I0515 23:27:19.536657 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.537498 kubelet[2570]: I0515 23:27:19.537472 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cni-path" (OuterVolumeSpecName: "cni-path") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.538181 kubelet[2570]: I0515 23:27:19.538146 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aee7d82-e01a-4611-ad4e-cff9df216cbe-kube-api-access-6rtqb" (OuterVolumeSpecName: "kube-api-access-6rtqb") pod "1aee7d82-e01a-4611-ad4e-cff9df216cbe" (UID: "1aee7d82-e01a-4611-ad4e-cff9df216cbe"). InnerVolumeSpecName "kube-api-access-6rtqb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:27:19.538252 kubelet[2570]: I0515 23:27:19.538181 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aee7d82-e01a-4611-ad4e-cff9df216cbe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1aee7d82-e01a-4611-ad4e-cff9df216cbe" (UID: "1aee7d82-e01a-4611-ad4e-cff9df216cbe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:27:19.538252 kubelet[2570]: I0515 23:27:19.538220 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-hostproc" (OuterVolumeSpecName: "hostproc") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.538252 kubelet[2570]: I0515 23:27:19.538237 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:27:19.539015 kubelet[2570]: I0515 23:27:19.538977 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-kube-api-access-s6khb" (OuterVolumeSpecName: "kube-api-access-s6khb") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "kube-api-access-s6khb". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:27:19.539987 kubelet[2570]: I0515 23:27:19.539933 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:27:19.539987 kubelet[2570]: I0515 23:27:19.539962 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:27:19.543515 kubelet[2570]: I0515 23:27:19.543485 2570 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2562d114-4fd6-4bb9-8af3-2b847b20d342-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2562d114-4fd6-4bb9-8af3-2b847b20d342" (UID: "2562d114-4fd6-4bb9-8af3-2b847b20d342"). 
InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 23:27:19.634993 kubelet[2570]: I0515 23:27:19.634946 2570 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.634993 kubelet[2570]: I0515 23:27:19.634977 2570 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.634993 kubelet[2570]: I0515 23:27:19.634987 2570 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6khb\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-kube-api-access-s6khb\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.634993 kubelet[2570]: I0515 23:27:19.634999 2570 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.634993 kubelet[2570]: I0515 23:27:19.635009 2570 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635018 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635026 2570 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2562d114-4fd6-4bb9-8af3-2b847b20d342-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635034 2570 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2562d114-4fd6-4bb9-8af3-2b847b20d342-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635042 2570 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635050 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635057 2570 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635066 2570 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rtqb\" (UniqueName: \"kubernetes.io/projected/1aee7d82-e01a-4611-ad4e-cff9df216cbe-kube-api-access-6rtqb\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635208 kubelet[2570]: I0515 23:27:19.635074 2570 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635384 kubelet[2570]: I0515 23:27:19.635081 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635384 kubelet[2570]: I0515 23:27:19.635089 2570 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2562d114-4fd6-4bb9-8af3-2b847b20d342-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.635384 kubelet[2570]: I0515 23:27:19.635097 2570 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1aee7d82-e01a-4611-ad4e-cff9df216cbe-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:27:19.705841 kubelet[2570]: I0515 23:27:19.705815 2570 scope.go:117] "RemoveContainer" containerID="03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389" May 15 23:27:19.707979 containerd[1456]: time="2025-05-15T23:27:19.707941140Z" level=info msg="RemoveContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\"" May 15 23:27:19.711425 systemd[1]: Removed slice kubepods-besteffort-pod1aee7d82_e01a_4611_ad4e_cff9df216cbe.slice - libcontainer container kubepods-besteffort-pod1aee7d82_e01a_4611_ad4e_cff9df216cbe.slice. May 15 23:27:19.715298 containerd[1456]: time="2025-05-15T23:27:19.715256576Z" level=info msg="RemoveContainer for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" returns successfully" May 15 23:27:19.716187 kubelet[2570]: I0515 23:27:19.716150 2570 scope.go:117] "RemoveContainer" containerID="03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389" May 15 23:27:19.717146 containerd[1456]: time="2025-05-15T23:27:19.717101645Z" level=error msg="ContainerStatus for \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\": not found" May 15 23:27:19.732957 kubelet[2570]: E0515 23:27:19.732904 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\": not found" containerID="03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389" May 15 23:27:19.733446 kubelet[2570]: I0515 23:27:19.732967 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389"} err="failed to get container status \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\": rpc error: code = NotFound desc = an error occurred when try to find container \"03f8f2ed5fbcd2f0a7ed1203575e0f604924f00e002cb90381c8b64a0af44389\": not found" May 15 23:27:19.733506 kubelet[2570]: I0515 23:27:19.733452 2570 scope.go:117] "RemoveContainer" containerID="44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01" May 15 23:27:19.736305 containerd[1456]: time="2025-05-15T23:27:19.736266148Z" level=info msg="RemoveContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\"" May 15 23:27:19.737049 systemd[1]: Removed slice 
kubepods-burstable-pod2562d114_4fd6_4bb9_8af3_2b847b20d342.slice - libcontainer container kubepods-burstable-pod2562d114_4fd6_4bb9_8af3_2b847b20d342.slice. May 15 23:27:19.737144 systemd[1]: kubepods-burstable-pod2562d114_4fd6_4bb9_8af3_2b847b20d342.slice: Consumed 6.517s CPU time, 124.4M memory peak, 152K read from disk, 12.9M written to disk. May 15 23:27:19.741604 containerd[1456]: time="2025-05-15T23:27:19.741546351Z" level=info msg="RemoveContainer for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" returns successfully" May 15 23:27:19.741875 kubelet[2570]: I0515 23:27:19.741803 2570 scope.go:117] "RemoveContainer" containerID="381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5" May 15 23:27:19.749124 containerd[1456]: time="2025-05-15T23:27:19.749091510Z" level=info msg="RemoveContainer for \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\"" May 15 23:27:19.753516 containerd[1456]: time="2025-05-15T23:27:19.753476859Z" level=info msg="RemoveContainer for \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" returns successfully" May 15 23:27:19.753723 kubelet[2570]: I0515 23:27:19.753697 2570 scope.go:117] "RemoveContainer" containerID="8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916" May 15 23:27:19.755835 containerd[1456]: time="2025-05-15T23:27:19.755785416Z" level=info msg="RemoveContainer for \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\"" May 15 23:27:19.759214 containerd[1456]: time="2025-05-15T23:27:19.759181389Z" level=info msg="RemoveContainer for \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" returns successfully" May 15 23:27:19.759407 kubelet[2570]: I0515 23:27:19.759377 2570 scope.go:117] "RemoveContainer" containerID="5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299" May 15 23:27:19.760873 containerd[1456]: time="2025-05-15T23:27:19.760842136Z" level=info msg="RemoveContainer for \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\"" May 15 23:27:19.763757 containerd[1456]: time="2025-05-15T23:27:19.763728141Z" level=info msg="RemoveContainer for \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" returns successfully" May 15 23:27:19.764035 kubelet[2570]: I0515 23:27:19.763997 2570 scope.go:117] "RemoveContainer" containerID="73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df" May 15 23:27:19.765479 containerd[1456]: time="2025-05-15T23:27:19.765448168Z" level=info msg="RemoveContainer for \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\"" May 15 23:27:19.771357 containerd[1456]: time="2025-05-15T23:27:19.771321581Z" level=info msg="RemoveContainer for \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" returns successfully" May 15 23:27:19.771519 kubelet[2570]: I0515 23:27:19.771502 2570 scope.go:117] "RemoveContainer" containerID="44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01" May 15 23:27:19.771899 containerd[1456]: time="2025-05-15T23:27:19.771806149Z" level=error msg="ContainerStatus for \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\": not found" May 15 23:27:19.771986 kubelet[2570]: E0515 23:27:19.771956 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\": not found" containerID="44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01" May 15 23:27:19.772018 kubelet[2570]: I0515 23:27:19.771981 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01"} err="failed to get container status \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\": rpc error: code = NotFound desc = an error occurred when try to find container \"44d4f78789611d67b3bd3d24b72e6bd96cfc8d8f27cd8494372db40ecf36be01\": not found" May 15 23:27:19.772018 kubelet[2570]: I0515 23:27:19.772000 2570 scope.go:117] "RemoveContainer" containerID="381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5" May 15 23:27:19.772180 containerd[1456]: time="2025-05-15T23:27:19.772150154Z" level=error msg="ContainerStatus for \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\": not found" May 15 23:27:19.772422 kubelet[2570]: E0515 23:27:19.772298 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\": not found" containerID="381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5" May 15 23:27:19.772422 kubelet[2570]: I0515 23:27:19.772328 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5"} err="failed to get container status \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\": rpc error: code = NotFound desc = an error occurred when try to find container \"381cb113578bf0d7aba7d9c1e69119c38895d97763216de91eb7673936aecab5\": not found" May 15 23:27:19.772422 kubelet[2570]: I0515 23:27:19.772345 2570 scope.go:117] "RemoveContainer" containerID="8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916" May 15 23:27:19.772530 containerd[1456]: time="2025-05-15T23:27:19.772479799Z" level=error msg="ContainerStatus for \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\": not found" May 15 23:27:19.772603 kubelet[2570]: E0515 23:27:19.772555 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\": not found" containerID="8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916" May 15 23:27:19.772603 kubelet[2570]: I0515 23:27:19.772581 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916"} err="failed to get container status \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d82261c45dabc0e8efb85ccc63f05fbc754b77a6a113071507e1466f3f2f916\": not found" May 15 23:27:19.772603 kubelet[2570]: I0515 23:27:19.772597 2570 scope.go:117] 
"RemoveContainer" containerID="5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299" May 15 23:27:19.772947 containerd[1456]: time="2025-05-15T23:27:19.772905686Z" level=error msg="ContainerStatus for \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\": not found" May 15 23:27:19.773061 kubelet[2570]: E0515 23:27:19.773040 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\": not found" containerID="5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299" May 15 23:27:19.773119 kubelet[2570]: I0515 23:27:19.773065 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299"} err="failed to get container status \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\": rpc error: code = NotFound desc = an error occurred when try to find container \"5cd9dde72c01acd7e9becf75691438bd0a478ffdc8053879f6cc8358b0b08299\": not found" May 15 23:27:19.773119 kubelet[2570]: I0515 23:27:19.773081 2570 scope.go:117] "RemoveContainer" containerID="73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df" May 15 23:27:19.773479 containerd[1456]: time="2025-05-15T23:27:19.773392654Z" level=error msg="ContainerStatus for \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\": not found" May 15 23:27:19.773661 kubelet[2570]: E0515 23:27:19.773613 2570 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\": not found" containerID="73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df" May 15 23:27:19.773661 kubelet[2570]: I0515 23:27:19.773641 2570 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df"} err="failed to get container status \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\": rpc error: code = NotFound desc = an error occurred when try to find container \"73f3f720f96d76a6b8e49b29a1b12e0c9787355148ca1d65a35e65cf05d219df\": not found" May 15 23:27:20.304284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c511790f72093b69c874f65bdf13495bfce1744cbb8cf88de3f1b6f5df594721-shm.mount: Deactivated successfully. May 15 23:27:20.304655 systemd[1]: var-lib-kubelet-pods-1aee7d82\x2de01a\x2d4611\x2dad4e\x2dcff9df216cbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6rtqb.mount: Deactivated successfully. May 15 23:27:20.304832 systemd[1]: var-lib-kubelet-pods-2562d114\x2d4fd6\x2d4bb9\x2d8af3\x2d2b847b20d342-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds6khb.mount: Deactivated successfully. May 15 23:27:20.304982 systemd[1]: var-lib-kubelet-pods-2562d114\x2d4fd6\x2d4bb9\x2d8af3\x2d2b847b20d342-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 23:27:20.305122 systemd[1]: var-lib-kubelet-pods-2562d114\x2d4fd6\x2d4bb9\x2d8af3\x2d2b847b20d342-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:27:20.510720 kubelet[2570]: I0515 23:27:20.510297 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aee7d82-e01a-4611-ad4e-cff9df216cbe" path="/var/lib/kubelet/pods/1aee7d82-e01a-4611-ad4e-cff9df216cbe/volumes" May 15 23:27:20.510720 kubelet[2570]: I0515 23:27:20.510665 2570 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" path="/var/lib/kubelet/pods/2562d114-4fd6-4bb9-8af3-2b847b20d342/volumes" May 15 23:27:21.204631 sshd[4164]: Connection closed by 10.0.0.1 port 46792 May 15 23:27:21.205216 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 15 23:27:21.219919 systemd[1]: sshd@22-10.0.0.41:22-10.0.0.1:46792.service: Deactivated successfully. May 15 23:27:21.221516 systemd[1]: session-23.scope: Deactivated successfully. May 15 23:27:21.221759 systemd[1]: session-23.scope: Consumed 1.493s CPU time, 28.3M memory peak. May 15 23:27:21.222376 systemd-logind[1440]: Session 23 logged out. Waiting for processes to exit. May 15 23:27:21.224061 systemd[1]: Started sshd@23-10.0.0.41:22-10.0.0.1:46794.service - OpenSSH per-connection server daemon (10.0.0.1:46794). May 15 23:27:21.224715 systemd-logind[1440]: Removed session 23. May 15 23:27:21.274516 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 46794 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:21.276122 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:21.280045 systemd-logind[1440]: New session 24 of user core. May 15 23:27:21.289822 systemd[1]: Started session-24.scope - Session 24 of User core. 
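The var-lib-kubelet-pods-...\x2d...\x7e....mount unit names in the entries above are systemd path-escaped forms of the kubelet volume directories: the leading '/' is dropped, the remaining '/' separators become '-', and bytes outside [A-Za-z0-9:_.] (including '-' itself and '~') become \xNN. A rough stdlib Go reimplementation of that escaping, for illustration only; the authoritative rules are systemd's own, and edge cases such as the root path or a leading '.' are skipped here:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeSystemdPath mimics `systemd-escape --path` for ordinary absolute
// paths: trim the surrounding '/', turn interior '/' into '-', and
// hex-escape every byte that is not [A-Za-z0-9:_.] as \xNN.
func escapeSystemdPath(path string) string {
	p := strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	dir := "/var/lib/kubelet/pods/2562d114-4fd6-4bb9-8af3-2b847b20d342/volumes/kubernetes.io~projected/hubble-tls"
	fmt.Println(escapeSystemdPath(dir) + ".mount")
	// var-lib-kubelet-pods-2562d114\x2d4fd6\x2d4bb9\x2d8af3\x2d2b847b20d342-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount
}
```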
May 15 23:27:21.508397 kubelet[2570]: E0515 23:27:21.508289 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:21.561555 kubelet[2570]: E0515 23:27:21.561504 2570 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:27:22.490767 sshd[4316]: Connection closed by 10.0.0.1 port 46794 May 15 23:27:22.491461 sshd-session[4313]: pam_unix(sshd:session): session closed for user core May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505682 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="apply-sysctl-overwrites" May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505730 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="mount-bpf-fs" May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505737 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="clean-cilium-state" May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505743 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="cilium-agent" May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505748 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="mount-cgroup" May 15 23:27:22.506890 kubelet[2570]: E0515 23:27:22.505753 2570 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1aee7d82-e01a-4611-ad4e-cff9df216cbe" containerName="cilium-operator" May 15 23:27:22.506890 kubelet[2570]: I0515 23:27:22.505774 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="2562d114-4fd6-4bb9-8af3-2b847b20d342" containerName="cilium-agent" May 15 23:27:22.506890 kubelet[2570]: I0515 23:27:22.505780 2570 memory_manager.go:354] "RemoveStaleState removing state" podUID="1aee7d82-e01a-4611-ad4e-cff9df216cbe" containerName="cilium-operator" May 15 23:27:22.507067 systemd[1]: sshd@23-10.0.0.41:22-10.0.0.1:46794.service: Deactivated successfully. May 15 23:27:22.511069 systemd[1]: session-24.scope: Deactivated successfully. May 15 23:27:22.511291 systemd[1]: session-24.scope: Consumed 1.132s CPU time, 26.2M memory peak. May 15 23:27:22.513517 systemd-logind[1440]: Session 24 logged out. Waiting for processes to exit. May 15 23:27:22.517726 systemd[1]: Started sshd@24-10.0.0.41:22-10.0.0.1:54514.service - OpenSSH per-connection server daemon (10.0.0.1:54514). May 15 23:27:22.522044 systemd-logind[1440]: Removed session 24. May 15 23:27:22.532914 systemd[1]: Created slice kubepods-burstable-pod6ad3a81b_551b_4631_97ff_1cb0a3bb9a3b.slice - libcontainer container kubepods-burstable-pod6ad3a81b_551b_4631_97ff_1cb0a3bb9a3b.slice. May 15 23:27:22.569878 sshd[4327]: Accepted publickey for core from 10.0.0.1 port 54514 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:22.571130 sshd-session[4327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:22.575539 systemd-logind[1440]: New session 25 of user core. May 15 23:27:22.587996 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 15 23:27:22.637311 sshd[4330]: Connection closed by 10.0.0.1 port 54514 May 15 23:27:22.637622 sshd-session[4327]: pam_unix(sshd:session): session closed for user core May 15 23:27:22.649886 systemd[1]: sshd@24-10.0.0.41:22-10.0.0.1:54514.service: Deactivated successfully. May 15 23:27:22.651427 systemd[1]: session-25.scope: Deactivated successfully. May 15 23:27:22.652661 systemd-logind[1440]: Session 25 logged out. Waiting for processes to exit. May 15 23:27:22.653889 kubelet[2570]: I0515 23:27:22.653810 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-cilium-config-path\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.653889 kubelet[2570]: I0515 23:27:22.653851 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-xtables-lock\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.653889 kubelet[2570]: I0515 23:27:22.653869 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbgjg\" (UniqueName: \"kubernetes.io/projected/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-kube-api-access-qbgjg\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.653993 systemd[1]: Started sshd@25-10.0.0.41:22-10.0.0.1:54518.service - OpenSSH per-connection server daemon (10.0.0.1:54518). May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654278 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-cilium-cgroup\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654306 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-host-proc-sys-net\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654322 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-lib-modules\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654357 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-hostproc\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654376 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-bpf-maps\") pod \"cilium-n69md\" (UID: 
\"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654573 kubelet[2570]: I0515 23:27:22.654395 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-cilium-run\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654412 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-host-proc-sys-kernel\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654426 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-cni-path\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654444 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-etc-cni-netd\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654459 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-hubble-tls\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654473 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-clustermesh-secrets\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654756 kubelet[2570]: I0515 23:27:22.654488 2570 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b-cilium-ipsec-secrets\") pod \"cilium-n69md\" (UID: \"6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b\") " pod="kube-system/cilium-n69md" May 15 23:27:22.654978 systemd-logind[1440]: Removed session 25. May 15 23:27:22.701602 sshd[4336]: Accepted publickey for core from 10.0.0.1 port 54518 ssh2: RSA SHA256:6GFLX06zIq6nlESG7l1+qHx7vN81iF4ij8UxPyFkEhg May 15 23:27:22.702787 sshd-session[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:27:22.707296 systemd-logind[1440]: New session 26 of user core. May 15 23:27:22.717919 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 15 23:27:22.839085 kubelet[2570]: E0515 23:27:22.838964 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:22.840338 containerd[1456]: time="2025-05-15T23:27:22.839669554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n69md,Uid:6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b,Namespace:kube-system,Attempt:0,}" May 15 23:27:22.853163 containerd[1456]: time="2025-05-15T23:27:22.853065501Z" level=info msg="connecting to shim 0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" namespace=k8s.io protocol=ttrpc version=3 May 15 23:27:22.873849 systemd[1]: Started cri-containerd-0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4.scope - libcontainer container 0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4. May 15 23:27:22.896120 containerd[1456]: time="2025-05-15T23:27:22.896079064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-n69md,Uid:6ad3a81b-551b-4631-97ff-1cb0a3bb9a3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\"" May 15 23:27:22.897123 kubelet[2570]: E0515 23:27:22.897088 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:22.899078 containerd[1456]: time="2025-05-15T23:27:22.899031865Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 23:27:22.914897 containerd[1456]: time="2025-05-15T23:27:22.914824286Z" level=info msg="Container f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172: CDI devices from CRI Config.CDIDevices: []" May 15 23:27:22.922428 containerd[1456]: time="2025-05-15T23:27:22.922391432Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\"" May 15 23:27:22.923219 containerd[1456]: time="2025-05-15T23:27:22.923014841Z" level=info msg="StartContainer for \"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\"" May 15 23:27:22.923812 containerd[1456]: time="2025-05-15T23:27:22.923776532Z" level=info msg="connecting to shim f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" protocol=ttrpc version=3 May 15 23:27:22.940841 systemd[1]: Started cri-containerd-f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172.scope - libcontainer container f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172. May 15 23:27:22.962538 containerd[1456]: time="2025-05-15T23:27:22.962503874Z" level=info msg="StartContainer for \"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\" returns successfully" May 15 23:27:22.975511 systemd[1]: cri-containerd-f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172.scope: Deactivated successfully. 
May 15 23:27:22.977518 containerd[1456]: time="2025-05-15T23:27:22.977465843Z" level=info msg="received exit event container_id:\"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\" id:\"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\" pid:4409 exited_at:{seconds:1747351642 nanos:977074518}" May 15 23:27:22.977610 containerd[1456]: time="2025-05-15T23:27:22.977525044Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\" id:\"f2b921d0e51bdb4cea94843cd855ef2f4c3bc04e8b7e65781fcd296fce4e1172\" pid:4409 exited_at:{seconds:1747351642 nanos:977074518}" May 15 23:27:23.749028 kubelet[2570]: E0515 23:27:23.748788 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:23.751799 containerd[1456]: time="2025-05-15T23:27:23.751758267Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 23:27:23.758218 containerd[1456]: time="2025-05-15T23:27:23.758161713Z" level=info msg="Container d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f: CDI devices from CRI Config.CDIDevices: []" May 15 23:27:23.767186 containerd[1456]: time="2025-05-15T23:27:23.767145914Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\"" May 15 23:27:23.767820 containerd[1456]: time="2025-05-15T23:27:23.767734162Z" level=info msg="StartContainer for \"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\"" May 15 23:27:23.768994 containerd[1456]: time="2025-05-15T23:27:23.768908138Z" level=info msg="connecting to shim d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" protocol=ttrpc version=3 May 15 23:27:23.787858 systemd[1]: Started cri-containerd-d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f.scope - libcontainer container d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f. May 15 23:27:23.811436 containerd[1456]: time="2025-05-15T23:27:23.811386549Z" level=info msg="StartContainer for \"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\" returns successfully" May 15 23:27:23.819096 systemd[1]: cri-containerd-d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f.scope: Deactivated successfully. 
May 15 23:27:23.820812 containerd[1456]: time="2025-05-15T23:27:23.820106666Z" level=info msg="received exit event container_id:\"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\" id:\"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\" pid:4456 exited_at:{seconds:1747351643 nanos:819665820}" May 15 23:27:23.820812 containerd[1456]: time="2025-05-15T23:27:23.820411230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\" id:\"d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f\" pid:4456 exited_at:{seconds:1747351643 nanos:819665820}" May 15 23:27:23.837569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d96cf97741ba803c51d0790dcf342b55e1d5d0bbd3911cbfa93ab2721ae7aa1f-rootfs.mount: Deactivated successfully. May 15 23:27:24.507883 kubelet[2570]: E0515 23:27:24.507846 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:24.752797 kubelet[2570]: E0515 23:27:24.752455 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:24.755445 containerd[1456]: time="2025-05-15T23:27:24.755403553Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 23:27:24.770119 containerd[1456]: time="2025-05-15T23:27:24.769307492Z" level=info msg="Container b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80: CDI devices from CRI Config.CDIDevices: []" May 15 23:27:24.775798 containerd[1456]: time="2025-05-15T23:27:24.775757655Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\"" May 15 23:27:24.776257 containerd[1456]: time="2025-05-15T23:27:24.776233462Z" level=info msg="StartContainer for \"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\"" May 15 23:27:24.777876 containerd[1456]: time="2025-05-15T23:27:24.777827402Z" level=info msg="connecting to shim b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" protocol=ttrpc version=3 May 15 23:27:24.798895 systemd[1]: Started cri-containerd-b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80.scope - libcontainer container b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80. May 15 23:27:24.829165 containerd[1456]: time="2025-05-15T23:27:24.829111544Z" level=info msg="StartContainer for \"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\" returns successfully" May 15 23:27:24.829788 systemd[1]: cri-containerd-b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80.scope: Deactivated successfully. 
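Each init container above goes through the same cycle: CreateContainer, StartContainer, the cri-containerd scope deactivating, then a TaskExit event carrying the PID and exit status. A rough sketch of watching those exit events with the containerd Go client; this assumes a containerd 1.x client layout, and the exact import paths and Any/typeurl types shift between releases:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	apievents "github.com/containerd/containerd/api/events"
	"github.com/containerd/typeurl/v2"
)

func main() {
	// Connect to the same socket the CRI plugin above is serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Subscribe only to task-exit events, the source of the
	// "TaskExit event in podsandbox handler" lines in this journal.
	ch, errs := client.Subscribe(context.Background(), `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-ch:
			ev, err := typeurl.UnmarshalAny(env.Event)
			if err != nil {
				continue
			}
			if exit, ok := ev.(*apievents.TaskExit); ok {
				log.Printf("container %s pid %d exited with status %d",
					exit.ContainerID, exit.Pid, exit.ExitStatus)
			}
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
```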
May 15 23:27:24.832123 containerd[1456]: time="2025-05-15T23:27:24.831981541Z" level=info msg="received exit event container_id:\"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\" id:\"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\" pid:4499 exited_at:{seconds:1747351644 nanos:831800339}" May 15 23:27:24.832123 containerd[1456]: time="2025-05-15T23:27:24.832076062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\" id:\"b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80\" pid:4499 exited_at:{seconds:1747351644 nanos:831800339}" May 15 23:27:24.850010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6ae375123630bd966944ba6ceaaa65c7626343a1e15e5b703c3baad0aaaff80-rootfs.mount: Deactivated successfully. May 15 23:27:25.756913 kubelet[2570]: E0515 23:27:25.756883 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:25.760093 containerd[1456]: time="2025-05-15T23:27:25.760055598Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 23:27:25.767677 containerd[1456]: time="2025-05-15T23:27:25.767056524Z" level=info msg="Container 3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24: CDI devices from CRI Config.CDIDevices: []" May 15 23:27:25.774755 containerd[1456]: time="2025-05-15T23:27:25.774718659Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\"" May 15 23:27:25.775180 containerd[1456]: time="2025-05-15T23:27:25.775154305Z" level=info msg="StartContainer for \"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\"" May 15 23:27:25.776267 containerd[1456]: time="2025-05-15T23:27:25.776226518Z" level=info msg="connecting to shim 3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" protocol=ttrpc version=3 May 15 23:27:25.795854 systemd[1]: Started cri-containerd-3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24.scope - libcontainer container 3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24. May 15 23:27:25.817721 systemd[1]: cri-containerd-3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24.scope: Deactivated successfully. 
May 15 23:27:25.819354 containerd[1456]: time="2025-05-15T23:27:25.819046728Z" level=info msg="received exit event container_id:\"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\" id:\"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\" pid:4539 exited_at:{seconds:1747351645 nanos:818737644}" May 15 23:27:25.819529 containerd[1456]: time="2025-05-15T23:27:25.819452333Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\" id:\"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\" pid:4539 exited_at:{seconds:1747351645 nanos:818737644}" May 15 23:27:25.820803 containerd[1456]: time="2025-05-15T23:27:25.820725589Z" level=info msg="StartContainer for \"3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24\" returns successfully" May 15 23:27:25.836295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d3cef3ebb06e7354fec1d7494b36cff3072f581f69505fb5fe0e616718d1d24-rootfs.mount: Deactivated successfully. May 15 23:27:26.509505 kubelet[2570]: E0515 23:27:26.509473 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:26.562918 kubelet[2570]: E0515 23:27:26.562792 2570 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:27:26.763166 kubelet[2570]: E0515 23:27:26.762827 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:26.767959 containerd[1456]: time="2025-05-15T23:27:26.767918964Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 23:27:26.781628 containerd[1456]: time="2025-05-15T23:27:26.781586206Z" level=info msg="Container 95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4: CDI devices from CRI Config.CDIDevices: []" May 15 23:27:26.784876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050940211.mount: Deactivated successfully. May 15 23:27:26.790747 containerd[1456]: time="2025-05-15T23:27:26.790702834Z" level=info msg="CreateContainer within sandbox \"0c7bb5dac13e435273d26f1817e2f5d3e81986fffb2d676376daa1946194d9b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\"" May 15 23:27:26.791830 containerd[1456]: time="2025-05-15T23:27:26.791736526Z" level=info msg="StartContainer for \"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\"" May 15 23:27:26.792885 containerd[1456]: time="2025-05-15T23:27:26.792861180Z" level=info msg="connecting to shim 95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4" address="unix:///run/containerd/s/5b5286cc6b4551ee1945bbaa52adc4158e662f27498e5014c26dad34a6baeaf5" protocol=ttrpc version=3 May 15 23:27:26.817819 systemd[1]: Started cri-containerd-95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4.scope - libcontainer container 95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4. 
May 15 23:27:26.843857 containerd[1456]: time="2025-05-15T23:27:26.843742944Z" level=info msg="StartContainer for \"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" returns successfully" May 15 23:27:26.896942 containerd[1456]: time="2025-05-15T23:27:26.896896415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"924cfe1e723afc898da651681cc47fc2497f0a5bfe8528702b6e850ad1a49a5b\" pid:4605 exited_at:{seconds:1747351646 nanos:896592771}" May 15 23:27:27.113748 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 23:27:27.768957 kubelet[2570]: E0515 23:27:27.768923 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:27.782935 kubelet[2570]: I0515 23:27:27.782885 2570 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-n69md" podStartSLOduration=5.782869506 podStartE2EDuration="5.782869506s" podCreationTimestamp="2025-05-15 23:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:27:27.78229678 +0000 UTC m=+81.351989634" watchObservedRunningTime="2025-05-15 23:27:27.782869506 +0000 UTC m=+81.352562000" May 15 23:27:28.477922 kubelet[2570]: I0515 23:27:28.477530 2570 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:27:28Z","lastTransitionTime":"2025-05-15T23:27:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 23:27:28.841115 kubelet[2570]: E0515 23:27:28.840939 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:29.123843 containerd[1456]: time="2025-05-15T23:27:29.123792210Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"0f288a392a35463899e60b7bdaecb4d5cb013e670df61ab74b4f322ec8bb3d80\" pid:4886 exit_status:1 exited_at:{seconds:1747351649 nanos:123314125}" May 15 23:27:29.919221 systemd-networkd[1389]: lxc_health: Link UP May 15 23:27:29.933173 systemd-networkd[1389]: lxc_health: Gained carrier May 15 23:27:30.841636 kubelet[2570]: E0515 23:27:30.841589 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:31.260742 containerd[1456]: time="2025-05-15T23:27:31.260661157Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"18f37e713229bbaed748ab3d97a29de757209e939ffe5cf090ac0e6cba0a9460\" pid:5143 exited_at:{seconds:1747351651 nanos:260223593}" May 15 23:27:31.403880 systemd-networkd[1389]: lxc_health: Gained IPv6LL May 15 23:27:31.775536 kubelet[2570]: E0515 23:27:31.775501 2570 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:27:33.420684 containerd[1456]: 
time="2025-05-15T23:27:33.420629031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"cccb06d4cf1a0f07fc6d2e47f50935df1c763181c694e467140512a153c002c7\" pid:5176 exited_at:{seconds:1747351653 nanos:420283588}" May 15 23:27:35.520940 containerd[1456]: time="2025-05-15T23:27:35.520896566Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"2539492338654acad4b79f058d18ff43c4737e5c69aaa966c2c6dd56a35b6ac9\" pid:5202 exited_at:{seconds:1747351655 nanos:520292361}" May 15 23:27:37.627225 containerd[1456]: time="2025-05-15T23:27:37.627186890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"95a1fe66ab8779efd58526dea27989226b0fb5b79077c1fc4c0d99548e8e28f4\" id:\"734249bc5f647b57a92cf030d3f6d19ae8d24ef490bc96449e18c182bbb9c4c5\" pid:5226 exited_at:{seconds:1747351657 nanos:626903488}" May 15 23:27:37.631760 sshd[4339]: Connection closed by 10.0.0.1 port 54518 May 15 23:27:37.632447 sshd-session[4336]: pam_unix(sshd:session): session closed for user core May 15 23:27:37.635905 systemd[1]: sshd@25-10.0.0.41:22-10.0.0.1:54518.service: Deactivated successfully. May 15 23:27:37.637664 systemd[1]: session-26.scope: Deactivated successfully. May 15 23:27:37.638328 systemd-logind[1440]: Session 26 logged out. Waiting for processes to exit. May 15 23:27:37.639234 systemd-logind[1440]: Removed session 26.