May 13 00:23:10.883916 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:23:10.883938 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:23:10.883947 kernel: KASLR enabled
May 13 00:23:10.883953 kernel: efi: EFI v2.7 by EDK II
May 13 00:23:10.883959 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:23:10.883964 kernel: random: crng init done
May 13 00:23:10.883971 kernel: ACPI: Early table checksum verification disabled
May 13 00:23:10.883977 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:23:10.883983 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:23:10.883991 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.883997 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884002 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884008 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884014 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884022 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884030 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884036 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884042 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:23:10.884049 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:23:10.884055 kernel: NUMA: Failed to initialise from firmware
May 13 00:23:10.884061 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:23:10.884068 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
May 13 00:23:10.884074 kernel: Zone ranges:
May 13 00:23:10.884080 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:23:10.884086 kernel: DMA32 empty
May 13 00:23:10.884094 kernel: Normal empty
May 13 00:23:10.884100 kernel: Movable zone start for each node
May 13 00:23:10.884107 kernel: Early memory node ranges
May 13 00:23:10.884113 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:23:10.884120 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:23:10.884127 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:23:10.884133 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:23:10.884139 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:23:10.884146 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:23:10.884153 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:23:10.884159 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:23:10.884166 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:23:10.884174 kernel: psci: probing for conduit method from ACPI.
May 13 00:23:10.884181 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:23:10.884187 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:23:10.884197 kernel: psci: Trusted OS migration not required
May 13 00:23:10.884203 kernel: psci: SMC Calling Convention v1.1
May 13 00:23:10.884210 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:23:10.884218 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:23:10.884225 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:23:10.884232 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:23:10.884238 kernel: Detected PIPT I-cache on CPU0
May 13 00:23:10.884245 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:23:10.884252 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:23:10.884258 kernel: CPU features: detected: Spectre-v4
May 13 00:23:10.884265 kernel: CPU features: detected: Spectre-BHB
May 13 00:23:10.884272 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:23:10.884279 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:23:10.884287 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:23:10.884294 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:23:10.884300 kernel: alternatives: applying boot alternatives
May 13 00:23:10.884308 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:23:10.884315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:23:10.884322 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:23:10.884328 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:23:10.884335 kernel: Fallback order for Node 0: 0
May 13 00:23:10.884342 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:23:10.884348 kernel: Policy zone: DMA
May 13 00:23:10.884355 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:23:10.884363 kernel: software IO TLB: area num 4.
May 13 00:23:10.884371 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:23:10.884378 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
May 13 00:23:10.884385 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:23:10.884391 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:23:10.884399 kernel: rcu: RCU event tracing is enabled.
May 13 00:23:10.884406 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:23:10.884413 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:23:10.884420 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:23:10.884427 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:23:10.884434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:23:10.884451 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:23:10.884460 kernel: GICv3: 256 SPIs implemented
May 13 00:23:10.884466 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:23:10.884473 kernel: Root IRQ handler: gic_handle_irq
May 13 00:23:10.884480 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:23:10.884487 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:23:10.884494 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:23:10.884501 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:23:10.884508 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:23:10.884520 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:23:10.884538 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:23:10.884545 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:23:10.884554 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:23:10.884561 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:23:10.884568 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:23:10.884575 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:23:10.884581 kernel: arm-pv: using stolen time PV
May 13 00:23:10.884589 kernel: Console: colour dummy device 80x25
May 13 00:23:10.884596 kernel: ACPI: Core revision 20230628
May 13 00:23:10.884603 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:23:10.884610 kernel: pid_max: default: 32768 minimum: 301
May 13 00:23:10.884616 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:23:10.884625 kernel: landlock: Up and running.
May 13 00:23:10.884632 kernel: SELinux: Initializing.
May 13 00:23:10.884639 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:23:10.884646 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:23:10.884653 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:23:10.884661 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:23:10.884668 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:23:10.884675 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:23:10.884682 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:23:10.884690 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:23:10.884697 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:23:10.884703 kernel: Remapping and enabling EFI services.
May 13 00:23:10.884710 kernel: smp: Bringing up secondary CPUs ...
May 13 00:23:10.884718 kernel: Detected PIPT I-cache on CPU1
May 13 00:23:10.884725 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:23:10.884732 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:23:10.884738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:23:10.884745 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:23:10.884753 kernel: Detected PIPT I-cache on CPU2
May 13 00:23:10.884760 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:23:10.884767 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:23:10.884779 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:23:10.884787 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:23:10.884800 kernel: Detected PIPT I-cache on CPU3
May 13 00:23:10.884809 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:23:10.884816 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:23:10.884823 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:23:10.884830 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:23:10.884837 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:23:10.884847 kernel: SMP: Total of 4 processors activated.
May 13 00:23:10.884854 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:23:10.884861 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:23:10.884868 kernel: CPU features: detected: Common not Private translations
May 13 00:23:10.884875 kernel: CPU features: detected: CRC32 instructions
May 13 00:23:10.884882 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:23:10.884891 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:23:10.884898 kernel: CPU features: detected: LSE atomic instructions
May 13 00:23:10.884906 kernel: CPU features: detected: Privileged Access Never
May 13 00:23:10.884913 kernel: CPU features: detected: RAS Extension Support
May 13 00:23:10.884920 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:23:10.884927 kernel: CPU: All CPU(s) started at EL1
May 13 00:23:10.884934 kernel: alternatives: applying system-wide alternatives
May 13 00:23:10.884942 kernel: devtmpfs: initialized
May 13 00:23:10.884949 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:23:10.884956 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:23:10.884965 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:23:10.884972 kernel: SMBIOS 3.0.0 present.
May 13 00:23:10.884980 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:23:10.884987 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:23:10.884994 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:23:10.885002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:23:10.885010 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:23:10.885018 kernel: audit: initializing netlink subsys (disabled)
May 13 00:23:10.885027 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 13 00:23:10.885034 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:23:10.885041 kernel: cpuidle: using governor menu
May 13 00:23:10.885049 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:23:10.885057 kernel: ASID allocator initialised with 32768 entries
May 13 00:23:10.885064 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:23:10.885071 kernel: Serial: AMBA PL011 UART driver
May 13 00:23:10.885079 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:23:10.885086 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:23:10.885094 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:23:10.885102 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:23:10.885109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:23:10.885116 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:23:10.885123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:23:10.885130 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:23:10.885138 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:23:10.885145 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:23:10.885152 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:23:10.885159 kernel: ACPI: Added _OSI(Module Device)
May 13 00:23:10.885168 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:23:10.885175 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:23:10.885182 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:23:10.885189 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:23:10.885196 kernel: ACPI: Interpreter enabled
May 13 00:23:10.885203 kernel: ACPI: Using GIC for interrupt routing
May 13 00:23:10.885210 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:23:10.885218 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:23:10.885225 kernel: printk: console [ttyAMA0] enabled
May 13 00:23:10.885233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:23:10.885383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:23:10.885501 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:23:10.885573 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:23:10.885637 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:23:10.885700 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:23:10.885709 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:23:10.885721 kernel: PCI host bridge to bus 0000:00
May 13 00:23:10.885791 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:23:10.885864 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:23:10.885927 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:23:10.885989 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:23:10.886067 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:23:10.886144 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:23:10.886233 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:23:10.886311 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:23:10.886393 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:23:10.886481 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:23:10.886554 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:23:10.886619 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:23:10.886685 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:23:10.886746 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:23:10.886820 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:23:10.886831 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:23:10.886839 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:23:10.886846 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:23:10.886853 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:23:10.886861 kernel: iommu: Default domain type: Translated
May 13 00:23:10.886871 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:23:10.886878 kernel: efivars: Registered efivars operations
May 13 00:23:10.886885 kernel: vgaarb: loaded
May 13 00:23:10.886892 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:23:10.886899 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:23:10.886907 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:23:10.886914 kernel: pnp: PnP ACPI init
May 13 00:23:10.886986 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:23:10.886996 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:23:10.887006 kernel: NET: Registered PF_INET protocol family
May 13 00:23:10.887014 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:23:10.887022 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:23:10.887029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:23:10.887037 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:23:10.887044 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:23:10.887052 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:23:10.887060 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:23:10.887069 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:23:10.887076 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:23:10.887083 kernel: PCI: CLS 0 bytes, default 64
May 13 00:23:10.887091 kernel: kvm [1]: HYP mode not available
May 13 00:23:10.887098 kernel: Initialise system trusted keyrings
May 13 00:23:10.887105 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:23:10.887113 kernel: Key type asymmetric registered
May 13 00:23:10.887120 kernel: Asymmetric key parser 'x509' registered
May 13 00:23:10.887128 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:23:10.887136 kernel: io scheduler mq-deadline registered
May 13 00:23:10.887144 kernel: io scheduler kyber registered
May 13 00:23:10.887152 kernel: io scheduler bfq registered
May 13 00:23:10.887159 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:23:10.887167 kernel: ACPI: button: Power Button [PWRB]
May 13 00:23:10.887175 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:23:10.887243 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:23:10.887253 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:23:10.887260 kernel: thunder_xcv, ver 1.0
May 13 00:23:10.887267 kernel: thunder_bgx, ver 1.0
May 13 00:23:10.887276 kernel: nicpf, ver 1.0
May 13 00:23:10.887284 kernel: nicvf, ver 1.0
May 13 00:23:10.887361 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:23:10.887423 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:23:10 UTC (1747095790)
May 13 00:23:10.887433 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:23:10.887450 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:23:10.887458 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:23:10.887465 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:23:10.887474 kernel: NET: Registered PF_INET6 protocol family
May 13 00:23:10.887482 kernel: Segment Routing with IPv6
May 13 00:23:10.887489 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:23:10.887496 kernel: NET: Registered PF_PACKET protocol family
May 13 00:23:10.887503 kernel: Key type dns_resolver registered
May 13 00:23:10.887511 kernel: registered taskstats version 1
May 13 00:23:10.887519 kernel: Loading compiled-in X.509 certificates
May 13 00:23:10.887527 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:23:10.887534 kernel: Key type .fscrypt registered
May 13 00:23:10.887543 kernel: Key type fscrypt-provisioning registered
May 13 00:23:10.887551 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:23:10.887558 kernel: ima: Allocated hash algorithm: sha1
May 13 00:23:10.887566 kernel: ima: No architecture policies found
May 13 00:23:10.887574 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:23:10.887582 kernel: clk: Disabling unused clocks
May 13 00:23:10.887589 kernel: Freeing unused kernel memory: 39424K
May 13 00:23:10.887596 kernel: Run /init as init process
May 13 00:23:10.887604 kernel: with arguments:
May 13 00:23:10.887612 kernel: /init
May 13 00:23:10.887619 kernel: with environment:
May 13 00:23:10.887626 kernel: HOME=/
May 13 00:23:10.887633 kernel: TERM=linux
May 13 00:23:10.887640 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:23:10.887650 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:23:10.887659 systemd[1]: Detected virtualization kvm.
May 13 00:23:10.887669 systemd[1]: Detected architecture arm64.
May 13 00:23:10.887676 systemd[1]: Running in initrd.
May 13 00:23:10.887684 systemd[1]: No hostname configured, using default hostname.
May 13 00:23:10.887691 systemd[1]: Hostname set to .
May 13 00:23:10.887711 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:23:10.887721 systemd[1]: Queued start job for default target initrd.target.
May 13 00:23:10.887729 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:23:10.887749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:23:10.887760 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:23:10.887768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:23:10.887776 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:23:10.887784 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:23:10.887793 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:23:10.887808 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:23:10.887817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:23:10.887827 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:23:10.887834 systemd[1]: Reached target paths.target - Path Units.
May 13 00:23:10.887842 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:23:10.887850 systemd[1]: Reached target swap.target - Swaps.
May 13 00:23:10.887858 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:23:10.887866 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:23:10.887873 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:23:10.887881 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:23:10.887889 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:23:10.887899 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:23:10.887907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:23:10.887915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:23:10.887924 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:23:10.887932 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:23:10.887940 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:23:10.887948 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:23:10.887956 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:23:10.887966 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:23:10.887974 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:23:10.887981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:23:10.887989 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:23:10.887997 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:23:10.888005 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:23:10.888032 systemd-journald[240]: Collecting audit messages is disabled.
May 13 00:23:10.888051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:23:10.888059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:23:10.888069 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:23:10.888077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:23:10.888085 systemd-journald[240]: Journal started
May 13 00:23:10.888104 systemd-journald[240]: Runtime Journal (/run/log/journal/9ba473ad555345af9dc75b3233072a5a) is 5.9M, max 47.3M, 41.4M free.
May 13 00:23:10.875450 systemd-modules-load[241]: Inserted module 'overlay'
May 13 00:23:10.890566 systemd-modules-load[241]: Inserted module 'br_netfilter'
May 13 00:23:10.892053 kernel: Bridge firewalling registered
May 13 00:23:10.892073 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:23:10.893310 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:23:10.894434 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:23:10.904614 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:23:10.906870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:23:10.909599 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:23:10.911779 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:23:10.915532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:23:10.916785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:23:10.919327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:23:10.921839 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:23:10.924719 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:23:10.934243 dracut-cmdline[277]: dracut-dracut-053
May 13 00:23:10.936670 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:23:10.956102 systemd-resolved[280]: Positive Trust Anchors:
May 13 00:23:10.956119 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:23:10.956151 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:23:10.960913 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 13 00:23:10.962307 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:23:10.963384 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:23:11.009452 kernel: SCSI subsystem initialized
May 13 00:23:11.012467 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:23:11.020461 kernel: iscsi: registered transport (tcp)
May 13 00:23:11.034588 kernel: iscsi: registered transport (qla4xxx)
May 13 00:23:11.034609 kernel: QLogic iSCSI HBA Driver
May 13 00:23:11.085371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:23:11.092579 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:23:11.110826 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:23:11.110877 kernel: device-mapper: uevent: version 1.0.3
May 13 00:23:11.110892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:23:11.157464 kernel: raid6: neonx8 gen() 15751 MB/s
May 13 00:23:11.174462 kernel: raid6: neonx4 gen() 15657 MB/s
May 13 00:23:11.191466 kernel: raid6: neonx2 gen() 13215 MB/s
May 13 00:23:11.208453 kernel: raid6: neonx1 gen() 8459 MB/s
May 13 00:23:11.225453 kernel: raid6: int64x8 gen() 6966 MB/s
May 13 00:23:11.242454 kernel: raid6: int64x4 gen() 7352 MB/s
May 13 00:23:11.259454 kernel: raid6: int64x2 gen() 6128 MB/s
May 13 00:23:11.276454 kernel: raid6: int64x1 gen() 5058 MB/s
May 13 00:23:11.276469 kernel: raid6: using algorithm neonx8 gen() 15751 MB/s
May 13 00:23:11.293471 kernel: raid6: .... xor() 11910 MB/s, rmw enabled
May 13 00:23:11.293492 kernel: raid6: using neon recovery algorithm
May 13 00:23:11.298456 kernel: xor: measuring software checksum speed
May 13 00:23:11.298472 kernel: 8regs : 19807 MB/sec
May 13 00:23:11.299509 kernel: 32regs : 19631 MB/sec
May 13 00:23:11.299522 kernel: arm64_neon : 26981 MB/sec
May 13 00:23:11.299531 kernel: xor: using function: arm64_neon (26981 MB/sec)
May 13 00:23:11.351479 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:23:11.362404 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:23:11.370607 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:23:11.383715 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 13 00:23:11.386904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:23:11.402783 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:23:11.413760 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
May 13 00:23:11.440624 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:23:11.450601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:23:11.489962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:23:11.496739 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:23:11.509486 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:23:11.512024 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:23:11.514094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:23:11.515988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:23:11.524832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:23:11.529381 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:23:11.529546 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:23:11.534414 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:23:11.534463 kernel: GPT:9289727 != 19775487
May 13 00:23:11.534475 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:23:11.534485 kernel: GPT:9289727 != 19775487
May 13 00:23:11.534500 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:23:11.534515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:23:11.535189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:23:11.535275 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:23:11.537934 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:23:11.539980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:23:11.540042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:23:11.541723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:23:11.547948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:23:11.550586 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:23:11.558256 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:23:11.561468 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (512)
May 13 00:23:11.563385 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:23:11.565484 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (520)
May 13 00:23:11.573048 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:23:11.580353 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:23:11.584282 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:23:11.585291 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:23:11.601597 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:23:11.603408 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:23:11.608622 disk-uuid[554]: Primary Header is updated.
May 13 00:23:11.608622 disk-uuid[554]: Secondary Entries is updated.
May 13 00:23:11.608622 disk-uuid[554]: Secondary Header is updated.
May 13 00:23:11.613454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:23:11.627509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:23:12.627469 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:23:12.628302 disk-uuid[555]: The operation has completed successfully.
May 13 00:23:12.652142 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:23:12.652238 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:23:12.680661 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:23:12.683497 sh[577]: Success
May 13 00:23:12.696624 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:23:12.725041 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:23:12.737699 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:23:12.739121 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:23:12.751669 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:23:12.751703 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:23:12.751713 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:23:12.751723 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:23:12.751733 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:23:12.756378 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:23:12.757215 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:23:12.770630 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:23:12.771933 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:23:12.779790 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:23:12.779831 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:23:12.779842 kernel: BTRFS info (device vda6): using free space tree
May 13 00:23:12.782465 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:23:12.792824 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:23:12.794542 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:23:12.799290 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:23:12.805590 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:23:12.870560 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:23:12.883591 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:23:12.906133 ignition[668]: Ignition 2.19.0
May 13 00:23:12.906146 ignition[668]: Stage: fetch-offline
May 13 00:23:12.906179 ignition[668]: no configs at "/usr/lib/ignition/base.d"
May 13 00:23:12.906188 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:12.906354 ignition[668]: parsed url from cmdline: ""
May 13 00:23:12.906357 ignition[668]: no config URL provided
May 13 00:23:12.906362 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:23:12.910243 systemd-networkd[770]: lo: Link UP
May 13 00:23:12.906368 ignition[668]: no config at "/usr/lib/ignition/user.ign"
May 13 00:23:12.910246 systemd-networkd[770]: lo: Gained carrier
May 13 00:23:12.906390 ignition[668]: op(1): [started] loading QEMU firmware config module
May 13 00:23:12.910996 systemd-networkd[770]: Enumeration completed
May 13 00:23:12.906394 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:23:12.911274 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:23:12.911460 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:23:12.911463 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:23:12.919163 ignition[668]: op(1): [finished] loading QEMU firmware config module
May 13 00:23:12.912209 systemd-networkd[770]: eth0: Link UP
May 13 00:23:12.912212 systemd-networkd[770]: eth0: Gained carrier
May 13 00:23:12.912219 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:23:12.912847 systemd[1]: Reached target network.target - Network.
May 13 00:23:12.936492 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:23:12.963565 ignition[668]: parsing config with SHA512: e3fb07e5fda483d0b2ee4b7117ac052a97ebac5e3ad5c494aaffc95bf1a73e685277fe626b5eace95de2c0e329c6307863314fd1dc6e2846b44076864c45ea2f
May 13 00:23:12.969302 unknown[668]: fetched base config from "system"
May 13 00:23:12.969312 unknown[668]: fetched user config from "qemu"
May 13 00:23:12.969790 ignition[668]: fetch-offline: fetch-offline passed
May 13 00:23:12.971376 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:23:12.969860 ignition[668]: Ignition finished successfully
May 13 00:23:12.973192 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:23:12.985617 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:23:12.999862 ignition[776]: Ignition 2.19.0
May 13 00:23:12.999872 ignition[776]: Stage: kargs
May 13 00:23:13.000056 ignition[776]: no configs at "/usr/lib/ignition/base.d"
May 13 00:23:13.000065 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:13.000988 ignition[776]: kargs: kargs passed
May 13 00:23:13.001040 ignition[776]: Ignition finished successfully
May 13 00:23:13.005495 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:23:13.022597 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:23:13.031900 ignition[783]: Ignition 2.19.0
May 13 00:23:13.031910 ignition[783]: Stage: disks
May 13 00:23:13.032081 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 13 00:23:13.032090 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:13.032977 ignition[783]: disks: disks passed
May 13 00:23:13.034775 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:23:13.033019 ignition[783]: Ignition finished successfully
May 13 00:23:13.036197 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:23:13.038336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:23:13.039991 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:23:13.041362 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:23:13.043118 systemd[1]: Reached target basic.target - Basic System.
May 13 00:23:13.052581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:23:13.063353 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:23:13.067475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:23:13.080525 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:23:13.125204 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:23:13.126562 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:23:13.126257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:23:13.137568 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:23:13.139312 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:23:13.140637 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:23:13.140683 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:23:13.146913 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
May 13 00:23:13.146938 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:23:13.140704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:23:13.150028 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:23:13.150047 kernel: BTRFS info (device vda6): using free space tree
May 13 00:23:13.147700 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:23:13.152270 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:23:13.152243 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:23:13.155324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:23:13.193834 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:23:13.197411 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 13 00:23:13.200492 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:23:13.203645 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:23:13.270650 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:23:13.280519 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:23:13.282968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:23:13.287470 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:23:13.303285 ignition[914]: INFO : Ignition 2.19.0
May 13 00:23:13.303285 ignition[914]: INFO : Stage: mount
May 13 00:23:13.305925 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:23:13.305925 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:13.305925 ignition[914]: INFO : mount: mount passed
May 13 00:23:13.305925 ignition[914]: INFO : Ignition finished successfully
May 13 00:23:13.304496 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:23:13.306838 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:23:13.314567 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:23:13.749642 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:23:13.758608 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:23:13.763461 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
May 13 00:23:13.765705 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:23:13.765735 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:23:13.765745 kernel: BTRFS info (device vda6): using free space tree
May 13 00:23:13.767459 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:23:13.768602 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:23:13.785920 ignition[945]: INFO : Ignition 2.19.0
May 13 00:23:13.785920 ignition[945]: INFO : Stage: files
May 13 00:23:13.787520 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:23:13.787520 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:13.787520 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:23:13.791027 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:23:13.791027 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:23:13.791027 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:23:13.795099 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:23:13.795099 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:23:13.795099 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 00:23:13.795099 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 13 00:23:13.791500 unknown[945]: wrote ssh authorized keys file for user: core
May 13 00:23:14.047631 systemd-networkd[770]: eth0: Gained IPv6LL
May 13 00:23:14.684670 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 13 00:23:17.976666 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 13 00:23:17.976666 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:23:17.980458 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 00:23:18.319684 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:23:18.432640 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:23:18.434495 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
May 13 00:23:18.696779 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 13 00:23:19.097394 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
May 13 00:23:19.097394 ignition[945]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 13 00:23:19.101179 ignition[945]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:23:19.124392 ignition[945]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:23:19.128137 ignition[945]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:23:19.129655 ignition[945]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:23:19.129655 ignition[945]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:23:19.129655 ignition[945]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:23:19.129655 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:23:19.129655 ignition[945]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:23:19.129655 ignition[945]: INFO : files: files passed
May 13 00:23:19.129655 ignition[945]: INFO : Ignition finished successfully
May 13 00:23:19.130324 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:23:19.143635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:23:19.146602 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:23:19.147628 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:23:19.147708 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:23:19.154542 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:23:19.156808 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:23:19.156808 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:23:19.160606 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:23:19.161694 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:23:19.162970 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:23:19.178584 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:23:19.196704 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:23:19.197505 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:23:19.198933 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:23:19.200407 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:23:19.202051 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:23:19.202780 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:23:19.217280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:23:19.224652 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:23:19.233928 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:23:19.235147 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:23:19.237364 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:23:19.238786 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:23:19.238903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:23:19.241414 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:23:19.242825 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:23:19.244266 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:23:19.245791 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:23:19.247509 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:23:19.249275 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:23:19.250894 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:23:19.252615 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:23:19.254314 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:23:19.255819 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:23:19.257105 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:23:19.257222 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:23:19.259158 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:23:19.261032 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:23:19.262717 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:23:19.263489 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:23:19.265320 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:23:19.265428 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:23:19.267821 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:23:19.267940 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:23:19.269754 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:23:19.271100 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:23:19.276488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:23:19.277741 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:23:19.279763 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:23:19.281357 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:23:19.281458 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:23:19.282917 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:23:19.283003 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:23:19.284326 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:23:19.284456 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:23:19.285983 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:23:19.286088 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:23:19.298636 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:23:19.299510 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:23:19.299638 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:23:19.304667 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:23:19.305526 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:23:19.305662 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:23:19.308114 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:23:19.308217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:23:19.314457 ignition[998]: INFO : Ignition 2.19.0
May 13 00:23:19.314457 ignition[998]: INFO : Stage: umount
May 13 00:23:19.314457 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:23:19.314457 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:23:19.319564 ignition[998]: INFO : umount: umount passed
May 13 00:23:19.319564 ignition[998]: INFO : Ignition finished successfully
May 13 00:23:19.314998 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:23:19.315080 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:23:19.318677 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:23:19.319133 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:23:19.319981 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:23:19.321637 systemd[1]: Stopped target network.target - Network.
May 13 00:23:19.322582 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:23:19.322644 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:23:19.324106 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:23:19.324142 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:23:19.325819 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:23:19.325856 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:23:19.327180 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:23:19.327217 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:23:19.328811 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:23:19.330418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:23:19.332298 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:23:19.332383 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:23:19.334116 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:23:19.334192 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:23:19.337574 systemd-networkd[770]: eth0: DHCPv6 lease lost
May 13 00:23:19.337641 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:23:19.337736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:23:19.340156 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:23:19.340264 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:23:19.342683 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:23:19.342865 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:23:19.352566 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:23:19.353892 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:23:19.353955 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:23:19.355822 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:23:19.355868 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:23:19.357689 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:23:19.357726 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:23:19.359651 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:23:19.359698 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:23:19.361739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:23:19.370131 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:23:19.371079 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:23:19.376028 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:23:19.376167 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:23:19.378402 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:23:19.378455 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:23:19.380266 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:23:19.380295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:23:19.382091 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:23:19.382139 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:23:19.384803 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:23:19.384850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:23:19.387499 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:23:19.387546 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:23:19.404597 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:23:19.405633 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:23:19.405700 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:23:19.407819 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:23:19.407869 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:23:19.410032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:23:19.410111 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:23:19.412325 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:23:19.414569 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:23:19.423848 systemd[1]: Switching root.
May 13 00:23:19.446290 systemd-journald[240]: Journal stopped
May 13 00:23:20.192357 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
May 13 00:23:20.192416 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:23:20.192428 kernel: SELinux: policy capability open_perms=1
May 13 00:23:20.192455 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:23:20.192467 kernel: SELinux: policy capability always_check_network=0
May 13 00:23:20.192483 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:23:20.192494 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:23:20.192504 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:23:20.192514 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:23:20.192524 kernel: audit: type=1403 audit(1747095799.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:23:20.192535 systemd[1]: Successfully loaded SELinux policy in 30.346ms.
May 13 00:23:20.192554 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.622ms.
May 13 00:23:20.192566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:23:20.192577 systemd[1]: Detected virtualization kvm.
May 13 00:23:20.192590 systemd[1]: Detected architecture arm64.
May 13 00:23:20.192604 systemd[1]: Detected first boot.
May 13 00:23:20.192614 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:23:20.192625 zram_generator::config[1043]: No configuration found.
May 13 00:23:20.192637 systemd[1]: Populated /etc with preset unit settings.
May 13 00:23:20.192647 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 13 00:23:20.192657 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 13 00:23:20.192668 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 13 00:23:20.192680 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:23:20.192691 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:23:20.192702 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:23:20.192712 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:23:20.192724 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:23:20.192735 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:23:20.192754 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:23:20.192765 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:23:20.192776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:23:20.192790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:23:20.192801 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:23:20.192814 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:23:20.192826 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:23:20.192837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:23:20.192848 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 00:23:20.192859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:23:20.192869 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 13 00:23:20.192880 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 13 00:23:20.192892 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 13 00:23:20.192903 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:23:20.192914 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:23:20.192924 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:23:20.192935 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:23:20.192945 systemd[1]: Reached target swap.target - Swaps.
May 13 00:23:20.192956 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:23:20.192968 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:23:20.192981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:23:20.192991 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:23:20.193002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:23:20.193013 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:23:20.193023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:23:20.193033 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:23:20.193044 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:23:20.193055 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:23:20.193065 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:23:20.193077 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:23:20.193089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:23:20.193099 systemd[1]: Reached target machines.target - Containers.
May 13 00:23:20.193109 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:23:20.193120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:23:20.193130 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:23:20.193141 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:23:20.193151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:23:20.193164 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:23:20.193176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:23:20.193187 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:23:20.193199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:23:20.193210 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:23:20.193220 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 13 00:23:20.193231 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 13 00:23:20.193241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 13 00:23:20.193254 systemd[1]: Stopped systemd-fsck-usr.service.
May 13 00:23:20.193265 kernel: fuse: init (API version 7.39)
May 13 00:23:20.193275 kernel: loop: module loaded
May 13 00:23:20.193284 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:23:20.193295 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:23:20.193305 kernel: ACPI: bus type drm_connector registered
May 13 00:23:20.193315 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:23:20.193325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:23:20.193335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:23:20.193345 systemd[1]: verity-setup.service: Deactivated successfully.
May 13 00:23:20.193376 systemd-journald[1114]: Collecting audit messages is disabled.
May 13 00:23:20.193398 systemd[1]: Stopped verity-setup.service.
May 13 00:23:20.193409 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:23:20.193421 systemd-journald[1114]: Journal started
May 13 00:23:20.193463 systemd-journald[1114]: Runtime Journal (/run/log/journal/9ba473ad555345af9dc75b3233072a5a) is 5.9M, max 47.3M, 41.4M free.
May 13 00:23:20.019004 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:23:20.035723 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:23:20.036064 systemd[1]: systemd-journald.service: Deactivated successfully.
May 13 00:23:20.196461 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:23:20.196875 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:23:20.198102 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:23:20.199003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:23:20.199930 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:23:20.201021 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:23:20.202113 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:23:20.203233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:23:20.204455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:23:20.204644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:23:20.205816 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:23:20.205965 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:23:20.207057 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:23:20.207191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:23:20.208545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:23:20.208680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:23:20.209885 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:23:20.210022 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:23:20.211080 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:23:20.211211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:23:20.212322 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:23:20.213424 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:23:20.214890 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:23:20.227075 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:23:20.232578 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:23:20.234354 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:23:20.235217 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:23:20.235251 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:23:20.237232 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:23:20.239160 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:23:20.241191 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:23:20.242062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:23:20.243676 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:23:20.247607 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:23:20.249377 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:23:20.253707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:23:20.256517 systemd-journald[1114]: Time spent on flushing to /var/log/journal/9ba473ad555345af9dc75b3233072a5a is 42.784ms for 855 entries.
May 13 00:23:20.256517 systemd-journald[1114]: System Journal (/var/log/journal/9ba473ad555345af9dc75b3233072a5a) is 8.0M, max 195.6M, 187.6M free.
May 13 00:23:20.303891 systemd-journald[1114]: Received client request to flush runtime journal.
May 13 00:23:20.303943 kernel: loop0: detected capacity change from 0 to 114432
May 13 00:23:20.303961 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:23:20.255918 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:23:20.257076 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:23:20.262593 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:23:20.265712 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:23:20.268172 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:23:20.269289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:23:20.270480 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:23:20.271830 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 00:23:20.273215 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:23:20.276767 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:23:20.291268 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:23:20.296629 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:23:20.300480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:23:20.307391 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:23:20.315419 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 00:23:20.317057 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:23:20.325634 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:23:20.327963 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:23:20.329444 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 00:23:20.336468 kernel: loop1: detected capacity change from 0 to 201592
May 13 00:23:20.349112 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
May 13 00:23:20.349130 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
May 13 00:23:20.353180 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:23:20.366510 kernel: loop2: detected capacity change from 0 to 114328
May 13 00:23:20.397479 kernel: loop3: detected capacity change from 0 to 114432
May 13 00:23:20.403505 kernel: loop4: detected capacity change from 0 to 201592
May 13 00:23:20.408452 kernel: loop5: detected capacity change from 0 to 114328
May 13 00:23:20.411385 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 00:23:20.412165 (sd-merge)[1180]: Merged extensions into '/usr'.
May 13 00:23:20.415461 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 00:23:20.415586 systemd[1]: Reloading...
May 13 00:23:20.466473 zram_generator::config[1206]: No configuration found.
May 13 00:23:20.537262 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:23:20.563099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:23:20.598987 systemd[1]: Reloading finished in 182 ms.
May 13 00:23:20.629512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 00:23:20.630929 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 00:23:20.642599 systemd[1]: Starting ensure-sysext.service...
May 13 00:23:20.644777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:23:20.657220 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)...
May 13 00:23:20.657306 systemd[1]: Reloading...
May 13 00:23:20.665603 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:23:20.665879 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 00:23:20.666506 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:23:20.666726 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
May 13 00:23:20.666790 systemd-tmpfiles[1241]: ACLs are not supported, ignoring.
May 13 00:23:20.669650 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:23:20.669659 systemd-tmpfiles[1241]: Skipping /boot
May 13 00:23:20.677562 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:23:20.677577 systemd-tmpfiles[1241]: Skipping /boot
May 13 00:23:20.703730 zram_generator::config[1271]: No configuration found.
May 13 00:23:20.785221 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:23:20.820359 systemd[1]: Reloading finished in 162 ms.
May 13 00:23:20.834421 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:23:20.841877 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:23:20.849263 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:23:20.851378 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 00:23:20.853392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 00:23:20.859779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:23:20.867168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:23:20.871679 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 00:23:20.875610 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:23:20.878760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:23:20.881065 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:23:20.883782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:23:20.885542 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:23:20.887337 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:23:20.890733 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 00:23:20.892550 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:23:20.892672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:23:20.894091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:23:20.894211 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:23:20.896051 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:23:20.896177 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:23:20.907857 systemd-udevd[1310]: Using default interface naming scheme 'v255'.
May 13 00:23:20.909463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:23:20.921762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:23:20.929764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:23:20.934694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:23:20.935632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:23:20.938048 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 00:23:20.940648 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:23:20.944980 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:23:20.946604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 00:23:20.948139 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 00:23:20.949627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:23:20.950511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:23:20.958343 augenrules[1358]: No rules
May 13 00:23:20.961009 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:23:20.973921 systemd[1]: Finished ensure-sysext.service.
May 13 00:23:20.976812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:23:20.976948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:23:20.980855 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 13 00:23:20.981293 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:23:20.992551 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1335)
May 13 00:23:20.994664 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:23:21.001418 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:23:21.004611 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:23:21.007763 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:23:21.008523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:23:21.012480 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 00:23:21.014519 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:23:21.014993 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:23:21.015128 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:23:21.017795 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:23:21.017935 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:23:21.019042 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:23:21.019167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:23:21.025444 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 00:23:21.040817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:23:21.056732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:23:21.072601 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:23:21.083546 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:23:21.095520 systemd-resolved[1308]: Positive Trust Anchors:
May 13 00:23:21.095792 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:23:21.095882 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:23:21.106298 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 00:23:21.108921 systemd-resolved[1308]: Defaulting to hostname 'linux'.
May 13 00:23:21.110355 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:23:21.111937 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:23:21.113794 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:23:21.120703 systemd-networkd[1377]: lo: Link UP
May 13 00:23:21.120709 systemd-networkd[1377]: lo: Gained carrier
May 13 00:23:21.121611 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:23:21.122815 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 00:23:21.124158 systemd[1]: Reached target time-set.target - System Time Set.
May 13 00:23:21.125749 systemd-networkd[1377]: Enumeration completed
May 13 00:23:21.125893 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:23:21.126633 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:23:21.126719 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:23:21.127362 systemd[1]: Reached target network.target - Network.
May 13 00:23:21.129540 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:23:21.129729 systemd-networkd[1377]: eth0: Link UP
May 13 00:23:21.129787 systemd-networkd[1377]: eth0: Gained carrier
May 13 00:23:21.129856 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:23:21.139872 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:23:21.153478 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.75/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:23:21.154429 systemd-timesyncd[1378]: Network configuration changed, trying to establish connection.
May 13 00:23:21.156008 systemd-timesyncd[1378]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:23:21.156058 systemd-timesyncd[1378]: Initial clock synchronization to Tue 2025-05-13 00:23:21.375678 UTC.
May 13 00:23:21.164477 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
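The interface setup above shows why a stock Flatcar image gets networking with no user configuration: eth0 is matched by the catch-all /usr/lib/systemd/network/zz-default.network, which requests DHCP and here yields 10.0.0.75/16 with gateway 10.0.0.1. That shipped file is roughly equivalent to this sketch (paraphrased, not a verbatim copy):

    # Approximation of /usr/lib/systemd/network/zz-default.network
    [Match]
    # Matches any interface not claimed by an earlier .network file
    Name=*

    [Network]
    DHCP=yes

systemd-networkd considers .network files across /etc, /run, and /usr/lib in lexical order and configures an interface with the first file that matches it; the zz- prefix makes this default sort last, so any user-supplied unit in /etc/systemd/network takes precedence.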
May 13 00:23:21.180856 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:23:21.182347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:23:21.183488 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:23:21.184626 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 00:23:21.185868 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 00:23:21.187267 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 00:23:21.188553 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 00:23:21.189827 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 00:23:21.191059 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:23:21.191097 systemd[1]: Reached target paths.target - Path Units.
May 13 00:23:21.192033 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:23:21.193870 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 00:23:21.196164 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 00:23:21.204321 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 00:23:21.206434 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:23:21.208047 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 00:23:21.209190 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:23:21.210206 systemd[1]: Reached target basic.target - Basic System.
May 13 00:23:21.211165 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 00:23:21.211198 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 00:23:21.212025 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 00:23:21.213620 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:23:21.214621 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 00:23:21.217084 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 00:23:21.221734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 00:23:21.222597 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 00:23:21.226680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 00:23:21.229974 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 00:23:21.234063 jq[1409]: false
May 13 00:23:21.235883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 00:23:21.239477 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 00:23:21.241596 extend-filesystems[1410]: Found loop3
May 13 00:23:21.241596 extend-filesystems[1410]: Found loop4
May 13 00:23:21.241596 extend-filesystems[1410]: Found loop5
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda1
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda2
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda3
May 13 00:23:21.241596 extend-filesystems[1410]: Found usr
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda4
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda6
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda7
May 13 00:23:21.241596 extend-filesystems[1410]: Found vda9
May 13 00:23:21.241596 extend-filesystems[1410]: Checking size of /dev/vda9
May 13 00:23:21.242720 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 00:23:21.245052 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:23:21.245464 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:23:21.246690 systemd[1]: Starting update-engine.service - Update Engine...
May 13 00:23:21.250513 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 00:23:21.263935 jq[1425]: true
May 13 00:23:21.255001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:23:21.264985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:23:21.265164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 00:23:21.265422 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:23:21.265580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 00:23:21.267756 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:23:21.267903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 00:23:21.269684 extend-filesystems[1410]: Resized partition /dev/vda9
May 13 00:23:21.275090 dbus-daemon[1408]: [system] SELinux support is enabled
May 13 00:23:21.278107 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 00:23:21.289250 extend-filesystems[1435]: resize2fs 1.47.1 (20-May-2024)
May 13 00:23:21.304124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1343)
May 13 00:23:21.304347 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:23:21.304774 jq[1433]: true
May 13 00:23:21.307785 update_engine[1423]: I20250513 00:23:21.305886 1423 main.cc:92] Flatcar Update Engine starting
May 13 00:23:21.307227 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 00:23:21.313006 update_engine[1423]: I20250513 00:23:21.309835 1423 update_check_scheduler.cc:74] Next update check in 8m39s
May 13 00:23:21.316619 systemd[1]: Started update-engine.service - Update Engine.
May 13 00:23:21.320013 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 00:23:21.320689 systemd-logind[1422]: New seat seat0.
May 13 00:23:21.321287 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:23:21.321331 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 00:23:21.322695 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:23:21.322714 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 00:23:21.340463 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:23:21.353127 extend-filesystems[1435]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:23:21.353127 extend-filesystems[1435]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:23:21.353127 extend-filesystems[1435]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:23:21.355675 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 00:23:21.357300 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 00:23:21.363317 tar[1432]: linux-arm64/LICENSE
May 13 00:23:21.363551 tar[1432]: linux-arm64/helm
May 13 00:23:21.363664 extend-filesystems[1410]: Resized filesystem in /dev/vda9
May 13 00:23:21.366195 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:23:21.366399 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 00:23:21.391455 bash[1462]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:23:21.394188 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 00:23:21.397026 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 00:23:21.410899 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:23:21.516719 containerd[1434]: time="2025-05-13T00:23:21.516629520Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 00:23:21.544204 containerd[1434]: time="2025-05-13T00:23:21.544145080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.546391 containerd[1434]: time="2025-05-13T00:23:21.546351280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:23:21.546391 containerd[1434]: time="2025-05-13T00:23:21.546384400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:23:21.546515 containerd[1434]: time="2025-05-13T00:23:21.546401440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:23:21.546608 containerd[1434]: time="2025-05-13T00:23:21.546578800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 00:23:21.546608 containerd[1434]: time="2025-05-13T00:23:21.546602640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.546677 containerd[1434]: time="2025-05-13T00:23:21.546659880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:23:21.546677 containerd[1434]: time="2025-05-13T00:23:21.546675720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.546866 containerd[1434]: time="2025-05-13T00:23:21.546846200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:23:21.546956 containerd[1434]: time="2025-05-13T00:23:21.546867680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.546956 containerd[1434]: time="2025-05-13T00:23:21.546882720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:23:21.546956 containerd[1434]: time="2025-05-13T00:23:21.546893680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.547043 containerd[1434]: time="2025-05-13T00:23:21.546962240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.547169 containerd[1434]: time="2025-05-13T00:23:21.547149280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:23:21.547269 containerd[1434]: time="2025-05-13T00:23:21.547247600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:23:21.547269 containerd[1434]: time="2025-05-13T00:23:21.547266680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:23:21.547399 containerd[1434]: time="2025-05-13T00:23:21.547344200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:23:21.547399 containerd[1434]: time="2025-05-13T00:23:21.547393360Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:23:21.551336 containerd[1434]: time="2025-05-13T00:23:21.551304560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:23:21.551336 containerd[1434]: time="2025-05-13T00:23:21.551350240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:23:21.551463 containerd[1434]: time="2025-05-13T00:23:21.551366680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 00:23:21.551463 containerd[1434]: time="2025-05-13T00:23:21.551393280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 00:23:21.551463 containerd[1434]: time="2025-05-13T00:23:21.551409520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:23:21.551653 containerd[1434]: time="2025-05-13T00:23:21.551554480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:23:21.551850 containerd[1434]: time="2025-05-13T00:23:21.551821240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552151120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552185760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552200320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552215920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552231000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552244880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552259600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552274680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552288360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552307800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552320000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552341400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552356120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:23:21.552830 containerd[1434]: time="2025-05-13T00:23:21.552368240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552380200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552392360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552410760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552427680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552458160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552472720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552488480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552500120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552511320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552523080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552540320Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552561000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552572680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:23:21.553104 containerd[1434]: time="2025-05-13T00:23:21.552584640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:23:21.554218 containerd[1434]: time="2025-05-13T00:23:21.554185520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:23:21.554509 containerd[1434]: time="2025-05-13T00:23:21.554368400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 00:23:21.554509 containerd[1434]: time="2025-05-13T00:23:21.554390000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:23:21.554509 containerd[1434]: time="2025-05-13T00:23:21.554403280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 00:23:21.554509 containerd[1434]: time="2025-05-13T00:23:21.554412760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:23:21.554509 containerd[1434]: time="2025-05-13T00:23:21.554427920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 00:23:21.554702 containerd[1434]: time="2025-05-13T00:23:21.554451720Z" level=info msg="NRI interface is disabled by configuration."
May 13 00:23:21.554766 containerd[1434]: time="2025-05-13T00:23:21.554637240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:23:21.556006 containerd[1434]: time="2025-05-13T00:23:21.555358120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:23:21.556006 containerd[1434]: time="2025-05-13T00:23:21.555432920Z" level=info msg="Connect containerd service"
May 13 00:23:21.556006 containerd[1434]: time="2025-05-13T00:23:21.555487440Z" level=info msg="using legacy CRI server"
May 13 00:23:21.556006 containerd[1434]: time="2025-05-13T00:23:21.555495480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 00:23:21.556006 containerd[1434]: time="2025-05-13T00:23:21.555576040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:23:21.556808 containerd[1434]: time="2025-05-13T00:23:21.556772960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:23:21.557555
containerd[1434]: time="2025-05-13T00:23:21.557145160Z" level=info msg="Start subscribing containerd event" May 13 00:23:21.557722 containerd[1434]: time="2025-05-13T00:23:21.557651200Z" level=info msg="Start recovering state" May 13 00:23:21.558090 containerd[1434]: time="2025-05-13T00:23:21.557865480Z" level=info msg="Start event monitor" May 13 00:23:21.558090 containerd[1434]: time="2025-05-13T00:23:21.557889080Z" level=info msg="Start snapshots syncer" May 13 00:23:21.558090 containerd[1434]: time="2025-05-13T00:23:21.557899280Z" level=info msg="Start cni network conf syncer for default" May 13 00:23:21.558090 containerd[1434]: time="2025-05-13T00:23:21.557906680Z" level=info msg="Start streaming server" May 13 00:23:21.558730 containerd[1434]: time="2025-05-13T00:23:21.558521120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:23:21.558730 containerd[1434]: time="2025-05-13T00:23:21.558574440Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:23:21.559160 containerd[1434]: time="2025-05-13T00:23:21.559132320Z" level=info msg="containerd successfully booted in 0.044220s" May 13 00:23:21.559197 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:23:21.722077 tar[1432]: linux-arm64/README.md May 13 00:23:21.738077 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 13 00:23:21.857988 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:23:21.876969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:23:21.890679 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:23:21.896089 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:23:21.896267 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:23:21.898953 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 00:23:21.910528 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:23:21.914700 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:23:21.916554 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:23:21.917859 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:23:23.211159 systemd-networkd[1377]: eth0: Gained IPv6LL May 13 00:23:23.214546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:23:23.216448 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:23:23.228721 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:23:23.231474 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:23.237264 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:23:23.252999 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:23:23.253886 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:23:23.258245 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:23:23.269148 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:23:23.909539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:23.911121 systemd[1]: Reached target multi-user.target - Multi-User System. 
May 13 00:23:23.913730 systemd[1]: Startup finished in 556ms (kernel) + 8.952s (initrd) + 4.292s (userspace) = 13.801s. May 13 00:23:23.914031 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:23:24.017631 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:23:24.019190 systemd[1]: Started sshd@0-10.0.0.75:22-10.0.0.1:51804.service - OpenSSH per-connection server daemon (10.0.0.1:51804). May 13 00:23:24.071755 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 51804 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.073615 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.088519 systemd-logind[1422]: New session 1 of user core. May 13 00:23:24.089526 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:23:24.100709 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:23:24.114766 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:23:24.118594 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 00:23:24.127217 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:23:24.207910 systemd[1536]: Queued start job for default target default.target. May 13 00:23:24.222429 systemd[1536]: Created slice app.slice - User Application Slice. May 13 00:23:24.222477 systemd[1536]: Reached target paths.target - Paths. May 13 00:23:24.222492 systemd[1536]: Reached target timers.target - Timers. May 13 00:23:24.224679 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:23:24.235839 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:23:24.235959 systemd[1536]: Reached target sockets.target - Sockets. May 13 00:23:24.235973 systemd[1536]: Reached target basic.target - Basic System. May 13 00:23:24.236009 systemd[1536]: Reached target default.target - Main User Target. May 13 00:23:24.236036 systemd[1536]: Startup finished in 103ms. May 13 00:23:24.236159 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:23:24.237561 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:23:24.307411 systemd[1]: Started sshd@1-10.0.0.75:22-10.0.0.1:51816.service - OpenSSH per-connection server daemon (10.0.0.1:51816). May 13 00:23:24.353185 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 51816 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.355554 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.359625 systemd-logind[1422]: New session 2 of user core. May 13 00:23:24.376631 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:23:24.433692 sshd[1548]: pam_unix(sshd:session): session closed for user core May 13 00:23:24.440888 systemd[1]: sshd@1-10.0.0.75:22-10.0.0.1:51816.service: Deactivated successfully. May 13 00:23:24.443823 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:23:24.445735 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. May 13 00:23:24.453864 systemd[1]: Started sshd@2-10.0.0.75:22-10.0.0.1:51832.service - OpenSSH per-connection server daemon (10.0.0.1:51832). May 13 00:23:24.455385 systemd-logind[1422]: Removed session 2. 
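[Note] The boot-time breakdown above ("556ms (kernel) + 8.952s (initrd) + 4.292s (userspace) = 13.801s") is the same figure systemd-analyze reports after boot, and the per-unit detail behind it can be pulled apart on the running host with standard systemd tooling (nothing host-specific assumed):

    systemd-analyze                                   # kernel/initrd/userspace split, as logged above
    systemd-analyze blame                             # units sorted by time spent activating
    systemd-analyze critical-chain multi-user.target  # the dependency chain that gated this boot

blame's numbers are time-to-activation, not total cost, so a long entry (e.g. a unit waiting on the network) is not necessarily CPU-bound.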
May 13 00:23:24.461504 kubelet[1521]: E0513 00:23:24.461389 1521 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:23:24.464672 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:23:24.464800 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:23:24.483921 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 51832 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.485451 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.490535 systemd-logind[1422]: New session 3 of user core. May 13 00:23:24.498647 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:23:24.551870 sshd[1555]: pam_unix(sshd:session): session closed for user core May 13 00:23:24.570734 systemd[1]: sshd@2-10.0.0.75:22-10.0.0.1:51832.service: Deactivated successfully. May 13 00:23:24.573831 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:23:24.575656 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. May 13 00:23:24.594368 systemd[1]: Started sshd@3-10.0.0.75:22-10.0.0.1:51840.service - OpenSSH per-connection server daemon (10.0.0.1:51840). May 13 00:23:24.595778 systemd-logind[1422]: Removed session 3. May 13 00:23:24.623766 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 51840 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.625035 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.628746 systemd-logind[1422]: New session 4 of user core. May 13 00:23:24.636615 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:23:24.700836 sshd[1563]: pam_unix(sshd:session): session closed for user core May 13 00:23:24.714076 systemd[1]: sshd@3-10.0.0.75:22-10.0.0.1:51840.service: Deactivated successfully. May 13 00:23:24.716109 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:23:24.717635 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. May 13 00:23:24.728792 systemd[1]: Started sshd@4-10.0.0.75:22-10.0.0.1:51846.service - OpenSSH per-connection server daemon (10.0.0.1:51846). May 13 00:23:24.729824 systemd-logind[1422]: Removed session 4. May 13 00:23:24.757894 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 51846 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.759305 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.763484 systemd-logind[1422]: New session 5 of user core. May 13 00:23:24.769610 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:23:24.826450 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:23:24.827113 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:23:24.845370 sudo[1573]: pam_unix(sudo:session): session closed for user root May 13 00:23:24.847393 sshd[1570]: pam_unix(sshd:session): session closed for user core May 13 00:23:24.859984 systemd[1]: sshd@4-10.0.0.75:22-10.0.0.1:51846.service: Deactivated successfully. May 13 00:23:24.861366 systemd[1]: session-5.scope: Deactivated successfully. 
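[Note] The kubelet crash at the top of this block (open /var/lib/kubelet/config.yaml: no such file or directory) is the normal state of a node that has booted but not yet been initialized or joined: that file is written by kubeadm init/join, not shipped in the image. For reference, a minimal sketch of what ends up there, assuming a kubeadm-style setup; the two fields shown are the ones the later log lines confirm for this host (systemd cgroup driver, static-pod path), and a real generated file carries many more:

    # /var/lib/kubelet/config.yaml (sketch of a kubeadm-generated file)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests

Until the file exists, systemd keeps restarting the unit, which is why the same error recurs further down.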
May 13 00:23:24.864730 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. May 13 00:23:24.869702 systemd[1]: Started sshd@5-10.0.0.75:22-10.0.0.1:51848.service - OpenSSH per-connection server daemon (10.0.0.1:51848). May 13 00:23:24.870500 systemd-logind[1422]: Removed session 5. May 13 00:23:24.899009 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 51848 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:24.900218 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:24.903693 systemd-logind[1422]: New session 6 of user core. May 13 00:23:24.909590 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:23:24.960803 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:23:24.961077 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:23:24.963990 sudo[1582]: pam_unix(sudo:session): session closed for user root May 13 00:23:24.968363 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:23:24.968668 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:23:24.988702 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:23:24.989971 auditctl[1585]: No rules May 13 00:23:24.990807 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:23:24.991051 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:23:24.992615 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:23:25.015378 augenrules[1603]: No rules May 13 00:23:25.016752 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:23:25.018659 sudo[1581]: pam_unix(sudo:session): session closed for user root May 13 00:23:25.020268 sshd[1578]: pam_unix(sshd:session): session closed for user core May 13 00:23:25.031906 systemd[1]: sshd@5-10.0.0.75:22-10.0.0.1:51848.service: Deactivated successfully. May 13 00:23:25.033305 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:23:25.035519 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. May 13 00:23:25.044708 systemd[1]: Started sshd@6-10.0.0.75:22-10.0.0.1:51858.service - OpenSSH per-connection server daemon (10.0.0.1:51858). May 13 00:23:25.045510 systemd-logind[1422]: Removed session 6. May 13 00:23:25.073062 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 51858 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:23:25.074219 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:23:25.077556 systemd-logind[1422]: New session 7 of user core. May 13 00:23:25.090603 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:23:25.140899 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:23:25.141473 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:23:25.469733 systemd[1]: Starting docker.service - Docker Application Container Engine... 
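[Note] The audit sequence in session 6 reads straight through: the sudo rm cleared /etc/audit/rules.d/, the audit-rules.service restart flushed the kernel rule set (auditctl reporting "No rules"), and augenrules then found nothing left to compile, so it also reported "No rules". augenrules assembles every *.rules file under /etc/audit/rules.d into the kernel rule set; a sketch of restoring one, with a hypothetical file name and watch target:

    # /etc/audit/rules.d/10-example.rules  (illustrative)
    -w /etc/kubernetes/ -p wa -k k8s-config

    augenrules --load   # recompile and load everything under rules.d
    auditctl -l         # list the rules now active in the kernel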
May 13 00:23:25.469824 (dockerd)[1632]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 00:23:25.777663 dockerd[1632]: time="2025-05-13T00:23:25.777501859Z" level=info msg="Starting up" May 13 00:23:26.051947 dockerd[1632]: time="2025-05-13T00:23:26.051824856Z" level=info msg="Loading containers: start." May 13 00:23:26.138543 kernel: Initializing XFRM netlink socket May 13 00:23:26.203187 systemd-networkd[1377]: docker0: Link UP May 13 00:23:26.228908 dockerd[1632]: time="2025-05-13T00:23:26.228818864Z" level=info msg="Loading containers: done." May 13 00:23:26.243481 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3617396221-merged.mount: Deactivated successfully. May 13 00:23:26.244940 dockerd[1632]: time="2025-05-13T00:23:26.244887133Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 00:23:26.245036 dockerd[1632]: time="2025-05-13T00:23:26.244996750Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 May 13 00:23:26.245117 dockerd[1632]: time="2025-05-13T00:23:26.245098604Z" level=info msg="Daemon has completed initialization" May 13 00:23:26.271260 dockerd[1632]: time="2025-05-13T00:23:26.271117193Z" level=info msg="API listen on /run/docker.sock" May 13 00:23:26.271972 systemd[1]: Started docker.service - Docker Application Container Engine. May 13 00:23:26.902178 containerd[1434]: time="2025-05-13T00:23:26.902129120Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 13 00:23:27.539967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032477446.mount: Deactivated successfully. 
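[Note] The overlay2 warning above ("Not using native diff ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") only degrades diffing during image builds, not container runtime performance; the daemon still comes up on overlay2. The resulting configuration can be confirmed from the CLI (standard docker subcommand; the expected values are read off the log lines above, not assumed):

    docker info --format '{{.Driver}} {{.ServerVersion}}'
    # overlay2 26.1.0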
May 13 00:23:28.405502 containerd[1434]: time="2025-05-13T00:23:28.405429149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:28.406054 containerd[1434]: time="2025-05-13T00:23:28.406002086Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 13 00:23:28.406815 containerd[1434]: time="2025-05-13T00:23:28.406784818Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:28.410124 containerd[1434]: time="2025-05-13T00:23:28.410066016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:28.411369 containerd[1434]: time="2025-05-13T00:23:28.411113775Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 1.508937793s" May 13 00:23:28.411369 containerd[1434]: time="2025-05-13T00:23:28.411155605Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 13 00:23:28.415143 containerd[1434]: time="2025-05-13T00:23:28.415046994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 13 00:23:29.452754 containerd[1434]: time="2025-05-13T00:23:29.452704856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:29.453655 containerd[1434]: time="2025-05-13T00:23:29.453370519Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 13 00:23:29.455569 containerd[1434]: time="2025-05-13T00:23:29.454406697Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:29.457636 containerd[1434]: time="2025-05-13T00:23:29.457597994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:29.458938 containerd[1434]: time="2025-05-13T00:23:29.458842475Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.043749093s" May 13 00:23:29.458938 containerd[1434]: time="2025-05-13T00:23:29.458879955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 13 00:23:29.459586 
containerd[1434]: time="2025-05-13T00:23:29.459345483Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 13 00:23:30.649487 containerd[1434]: time="2025-05-13T00:23:30.649425584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:30.650189 containerd[1434]: time="2025-05-13T00:23:30.650151330Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 13 00:23:30.650669 containerd[1434]: time="2025-05-13T00:23:30.650625564Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:30.653534 containerd[1434]: time="2025-05-13T00:23:30.653477107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:30.654835 containerd[1434]: time="2025-05-13T00:23:30.654786955Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.195402429s" May 13 00:23:30.654835 containerd[1434]: time="2025-05-13T00:23:30.654825798Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 13 00:23:30.655503 containerd[1434]: time="2025-05-13T00:23:30.655266398Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 13 00:23:31.609845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836366669.mount: Deactivated successfully. 
May 13 00:23:31.951245 containerd[1434]: time="2025-05-13T00:23:31.951114235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:31.951744 containerd[1434]: time="2025-05-13T00:23:31.951703784Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 13 00:23:31.952518 containerd[1434]: time="2025-05-13T00:23:31.952482483Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:31.954512 containerd[1434]: time="2025-05-13T00:23:31.954433705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:31.955246 containerd[1434]: time="2025-05-13T00:23:31.955032208Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag \"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.299730774s" May 13 00:23:31.955246 containerd[1434]: time="2025-05-13T00:23:31.955063302Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 13 00:23:31.955637 containerd[1434]: time="2025-05-13T00:23:31.955608851Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 13 00:23:32.498508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225156295.mount: Deactivated successfully. 
May 13 00:23:33.199220 containerd[1434]: time="2025-05-13T00:23:33.199040740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.200150 containerd[1434]: time="2025-05-13T00:23:33.199864234Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 13 00:23:33.200973 containerd[1434]: time="2025-05-13T00:23:33.200930896Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.204351 containerd[1434]: time="2025-05-13T00:23:33.204295638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.205809 containerd[1434]: time="2025-05-13T00:23:33.205756779Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.250096392s" May 13 00:23:33.205809 containerd[1434]: time="2025-05-13T00:23:33.205805606Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 13 00:23:33.206485 containerd[1434]: time="2025-05-13T00:23:33.206284898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 13 00:23:33.672183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055350126.mount: Deactivated successfully. 
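[Note] Each "ImageCreate event" above lands the image in containerd's CRI image store, which can be inspected over the same socket containerd reported serving earlier; a sketch, assuming crictl is installed (the endpoint is the one from the log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    # lists registry.k8s.io/kube-apiserver:v1.32.4, .../coredns/coredns:v1.11.3, etc.

Putting the endpoint in /etc/crictl.yaml (runtime-endpoint: unix:///run/containerd/containerd.sock) avoids repeating the flag.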
May 13 00:23:33.677109 containerd[1434]: time="2025-05-13T00:23:33.677052236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.677846 containerd[1434]: time="2025-05-13T00:23:33.677816639Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 13 00:23:33.678516 containerd[1434]: time="2025-05-13T00:23:33.678475781Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.680785 containerd[1434]: time="2025-05-13T00:23:33.680742584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:33.681604 containerd[1434]: time="2025-05-13T00:23:33.681564267Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.245598ms" May 13 00:23:33.681648 containerd[1434]: time="2025-05-13T00:23:33.681611806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 13 00:23:33.682086 containerd[1434]: time="2025-05-13T00:23:33.682052938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 13 00:23:34.200050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1589367386.mount: Deactivated successfully. May 13 00:23:34.715312 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 00:23:34.724645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:34.823224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:34.827692 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:23:34.864150 kubelet[1964]: E0513 00:23:34.863953 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:23:34.867154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:23:34.867302 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
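[Note] Two details in this block. First, the pause:3.10 pulled here is requested by the Kubernetes side, while containerd's own CRI config (the big dump earlier) still pins SandboxImage:registry.k8s.io/pause:3.8, which is why containerd later pulls pause:3.8 again when it actually creates the pod sandboxes. Both work side by side, but aligning them avoids the duplicate; a sketch of the relevant containerd setting (config v2 layout; takes effect after a containerd restart):

    # /etc/containerd/config.toml (fragment, sketch)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"

Second, the kubelet restart here ("restart counter is at 1") comes ten seconds after the config.yaml failure above, consistent with the Restart=always / RestartSec=10 settings kubeadm's kubelet unit conventionally ships with (an inference from the timing, not read from this host's unit file).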
May 13 00:23:35.840120 containerd[1434]: time="2025-05-13T00:23:35.840052506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:35.840589 containerd[1434]: time="2025-05-13T00:23:35.840542466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 13 00:23:35.841615 containerd[1434]: time="2025-05-13T00:23:35.841574358Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:35.847263 containerd[1434]: time="2025-05-13T00:23:35.847210713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:35.849149 containerd[1434]: time="2025-05-13T00:23:35.849104236Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.167015246s" May 13 00:23:35.849190 containerd[1434]: time="2025-05-13T00:23:35.849149252Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 13 00:23:41.679648 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:41.690644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:41.714919 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit session-7.scope)... May 13 00:23:41.714937 systemd[1]: Reloading... May 13 00:23:41.780469 zram_generator::config[2044]: No configuration found. May 13 00:23:41.888298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:23:41.942071 systemd[1]: Reloading finished in 226 ms. May 13 00:23:41.981332 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:23:41.981402 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 00:23:41.982573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:41.984315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:42.091417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:42.095863 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:23:42.129574 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:23:42.129574 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
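[Note] The "Reloading requested from client PID 2008 ('systemctl')" entry is the install script from session 7 rewriting the kubelet's unit environment and telling systemd to re-read it before stopping and restarting the service; the deprecation warnings that follow show the new flags taking effect. The conventional shape of that step, assuming a kubeadm-style drop-in (the drop-in path is the packaging convention, not read from this host):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf supplies
    # $KUBELET_KUBEADM_ARGS / $KUBELET_EXTRA_ARGS (the variables named in the log)
    systemctl daemon-reload
    systemctl restart kubelet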
May 13 00:23:42.129574 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:23:42.129918 kubelet[2092]: I0513 00:23:42.129640 2092 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:23:43.627480 kubelet[2092]: I0513 00:23:43.627419 2092 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:23:43.627480 kubelet[2092]: I0513 00:23:43.627470 2092 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:23:43.627838 kubelet[2092]: I0513 00:23:43.627793 2092 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:23:43.673179 kubelet[2092]: E0513 00:23:43.672625 2092 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.75:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:43.674826 kubelet[2092]: I0513 00:23:43.674795 2092 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:23:43.680563 kubelet[2092]: E0513 00:23:43.680531 2092 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:23:43.681214 kubelet[2092]: I0513 00:23:43.680680 2092 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:23:43.683200 kubelet[2092]: I0513 00:23:43.683180 2092 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:23:43.683808 kubelet[2092]: I0513 00:23:43.683761 2092 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:23:43.683968 kubelet[2092]: I0513 00:23:43.683803 2092 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:23:43.684055 kubelet[2092]: I0513 00:23:43.684041 2092 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:23:43.684055 kubelet[2092]: I0513 00:23:43.684051 2092 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:23:43.684258 kubelet[2092]: I0513 00:23:43.684238 2092 state_mem.go:36] "Initialized new in-memory state store" May 13 00:23:43.686733 kubelet[2092]: I0513 00:23:43.686709 2092 kubelet.go:446] "Attempting to sync node with API server" May 13 00:23:43.686733 kubelet[2092]: I0513 00:23:43.686732 2092 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:23:43.689459 kubelet[2092]: I0513 00:23:43.686749 2092 kubelet.go:352] "Adding apiserver pod source" May 13 00:23:43.689459 kubelet[2092]: I0513 00:23:43.686760 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:23:43.691457 kubelet[2092]: W0513 00:23:43.691408 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:43.691522 kubelet[2092]: E0513 00:23:43.691484 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:43.691700 kubelet[2092]: W0513 00:23:43.691655 2092 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:43.691736 kubelet[2092]: E0513 00:23:43.691705 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.75:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:43.692952 kubelet[2092]: I0513 00:23:43.692930 2092 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:23:43.693598 kubelet[2092]: I0513 00:23:43.693578 2092 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:23:43.693709 kubelet[2092]: W0513 00:23:43.693695 2092 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:23:43.694553 kubelet[2092]: I0513 00:23:43.694533 2092 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:23:43.694613 kubelet[2092]: I0513 00:23:43.694569 2092 server.go:1287] "Started kubelet" May 13 00:23:43.695152 kubelet[2092]: I0513 00:23:43.695122 2092 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:23:43.699404 kubelet[2092]: I0513 00:23:43.699380 2092 server.go:490] "Adding debug handlers to kubelet server" May 13 00:23:43.701742 kubelet[2092]: I0513 00:23:43.700482 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:23:43.701742 kubelet[2092]: I0513 00:23:43.699629 2092 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:23:43.701742 kubelet[2092]: I0513 00:23:43.701362 2092 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:23:43.702199 kubelet[2092]: I0513 00:23:43.702165 2092 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:23:43.702397 kubelet[2092]: E0513 00:23:43.702071 2092 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.75:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.75:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eee6527e1cd34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:23:43.694548276 +0000 UTC m=+1.595455564,LastTimestamp:2025-05-13 00:23:43.694548276 +0000 UTC m=+1.595455564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:23:43.702961 kubelet[2092]: E0513 00:23:43.702942 2092 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:23:43.703066 kubelet[2092]: I0513 00:23:43.703055 2092 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:23:43.703363 kubelet[2092]: 
I0513 00:23:43.703343 2092 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:23:43.703515 kubelet[2092]: I0513 00:23:43.703501 2092 reconciler.go:26] "Reconciler: start to sync state" May 13 00:23:43.703956 kubelet[2092]: W0513 00:23:43.703917 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:43.704060 kubelet[2092]: E0513 00:23:43.704042 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:43.704296 kubelet[2092]: I0513 00:23:43.704278 2092 factory.go:221] Registration of the systemd container factory successfully May 13 00:23:43.704783 kubelet[2092]: I0513 00:23:43.704752 2092 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:23:43.705477 kubelet[2092]: E0513 00:23:43.705459 2092 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:23:43.705704 kubelet[2092]: I0513 00:23:43.705690 2092 factory.go:221] Registration of the containerd container factory successfully May 13 00:23:43.706808 kubelet[2092]: E0513 00:23:43.706773 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="200ms" May 13 00:23:43.716560 kubelet[2092]: I0513 00:23:43.716539 2092 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:23:43.716560 kubelet[2092]: I0513 00:23:43.716555 2092 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:23:43.716666 kubelet[2092]: I0513 00:23:43.716573 2092 state_mem.go:36] "Initialized new in-memory state store" May 13 00:23:43.716929 kubelet[2092]: I0513 00:23:43.716896 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:23:43.717992 kubelet[2092]: I0513 00:23:43.717955 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:23:43.717992 kubelet[2092]: I0513 00:23:43.717982 2092 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:23:43.718081 kubelet[2092]: I0513 00:23:43.718002 2092 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
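[Note] Every "connect: connection refused" against https://10.0.0.75:6443 in this stretch is the bootstrap chicken-and-egg, not a fault: this kubelet is the thing that will start the API server, as a static pod read from the path it just registered ("Adding static pod path" path="/etc/kubernetes/manifests"), so its watches and node-lease requests cannot succeed yet. Note how the lease retry interval backs off (200ms here, then 400ms and 800ms below). On a kubeadm control-plane node the manifests directory conventionally ends up looking like this (a hypothetical listing; only the three pods visible later in this log are confirmed here):

    ls /etc/kubernetes/manifests
    # etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml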
May 13 00:23:43.718081 kubelet[2092]: I0513 00:23:43.718010 2092 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:23:43.718081 kubelet[2092]: E0513 00:23:43.718048 2092 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:23:43.718500 kubelet[2092]: W0513 00:23:43.718374 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:43.718500 kubelet[2092]: E0513 00:23:43.718414 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:43.784578 kubelet[2092]: I0513 00:23:43.784548 2092 policy_none.go:49] "None policy: Start" May 13 00:23:43.784578 kubelet[2092]: I0513 00:23:43.784577 2092 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:23:43.784578 kubelet[2092]: I0513 00:23:43.784591 2092 state_mem.go:35] "Initializing new in-memory state store" May 13 00:23:43.789632 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:23:43.801994 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:23:43.803126 kubelet[2092]: E0513 00:23:43.803099 2092 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:23:43.804732 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:23:43.816113 kubelet[2092]: I0513 00:23:43.816091 2092 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:23:43.816983 kubelet[2092]: I0513 00:23:43.816545 2092 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:23:43.816983 kubelet[2092]: I0513 00:23:43.816563 2092 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:23:43.816983 kubelet[2092]: I0513 00:23:43.816906 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:23:43.817876 kubelet[2092]: E0513 00:23:43.817852 2092 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 13 00:23:43.817936 kubelet[2092]: E0513 00:23:43.817909 2092 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 13 00:23:43.824360 systemd[1]: Created slice kubepods-burstable-pod90cb9c92498a85665aa7a61af53ef7ff.slice - libcontainer container kubepods-burstable-pod90cb9c92498a85665aa7a61af53ef7ff.slice. 
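[Note] The slice names created above encode the kubelet's QoS cgroup layout under the systemd driver: kubepods.slice holds everything, Burstable and BestEffort pods nest one level down (Guaranteed pods sit directly under kubepods.slice), and each pod gets its own pod<UID>.slice. The hierarchy can be eyeballed with standard systemd tooling; a sketch, with the pod UID taken from the log lines above:

    systemd-cgls --no-pager kubepods.slice
    # kubepods.slice
    # └─kubepods-burstable.slice
    #   └─kubepods-burstable-pod90cb9c92498a85665aa7a61af53ef7ff.slice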
May 13 00:23:43.846756 kubelet[2092]: E0513 00:23:43.846564 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:43.849805 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. May 13 00:23:43.851576 kubelet[2092]: E0513 00:23:43.851546 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:43.852797 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 13 00:23:43.854373 kubelet[2092]: E0513 00:23:43.854337 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:43.904788 kubelet[2092]: I0513 00:23:43.904690 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:43.904788 kubelet[2092]: I0513 00:23:43.904727 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:43.904788 kubelet[2092]: I0513 00:23:43.904748 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:43.904788 kubelet[2092]: I0513 00:23:43.904764 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:43.904788 kubelet[2092]: I0513 00:23:43.904780 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:23:43.904945 kubelet[2092]: I0513 00:23:43.904794 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:43.904945 kubelet[2092]: I0513 00:23:43.904810 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:43.904945 kubelet[2092]: I0513 00:23:43.904826 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:43.904945 kubelet[2092]: I0513 00:23:43.904840 2092 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:43.908185 kubelet[2092]: E0513 00:23:43.908141 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="400ms" May 13 00:23:43.918303 kubelet[2092]: I0513 00:23:43.918280 2092 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:23:43.918663 kubelet[2092]: E0513 00:23:43.918640 2092 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" May 13 00:23:44.119826 kubelet[2092]: I0513 00:23:44.119798 2092 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:23:44.120163 kubelet[2092]: E0513 00:23:44.120137 2092 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" May 13 00:23:44.147772 kubelet[2092]: E0513 00:23:44.147734 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.148385 containerd[1434]: time="2025-05-13T00:23:44.148333344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90cb9c92498a85665aa7a61af53ef7ff,Namespace:kube-system,Attempt:0,}" May 13 00:23:44.152646 kubelet[2092]: E0513 00:23:44.152612 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.153058 containerd[1434]: time="2025-05-13T00:23:44.153018954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 13 00:23:44.155592 kubelet[2092]: E0513 00:23:44.155512 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.156140 containerd[1434]: time="2025-05-13T00:23:44.156103517Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 13 00:23:44.308669 kubelet[2092]: E0513 00:23:44.308603 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="800ms" May 13 00:23:44.522212 kubelet[2092]: I0513 00:23:44.522118 2092 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:23:44.522521 kubelet[2092]: E0513 00:23:44.522472 2092 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.75:6443/api/v1/nodes\": dial tcp 10.0.0.75:6443: connect: connection refused" node="localhost" May 13 00:23:44.659856 kubelet[2092]: W0513 00:23:44.659787 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:44.659856 kubelet[2092]: E0513 00:23:44.659856 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.75:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:44.675547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057175657.mount: Deactivated successfully. May 13 00:23:44.679584 containerd[1434]: time="2025-05-13T00:23:44.679546616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:23:44.681426 containerd[1434]: time="2025-05-13T00:23:44.681374233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:23:44.682229 containerd[1434]: time="2025-05-13T00:23:44.682125725Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:23:44.683403 containerd[1434]: time="2025-05-13T00:23:44.683371976Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:23:44.684339 containerd[1434]: time="2025-05-13T00:23:44.684056491Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:23:44.684339 containerd[1434]: time="2025-05-13T00:23:44.684215282Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:23:44.685146 containerd[1434]: time="2025-05-13T00:23:44.685120037Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:23:44.686875 containerd[1434]: time="2025-05-13T00:23:44.686847027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:23:44.688921 containerd[1434]: time="2025-05-13T00:23:44.688861274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.695427ms" May 13 00:23:44.690655 containerd[1434]: time="2025-05-13T00:23:44.690626360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 537.455625ms" May 13 00:23:44.691077 containerd[1434]: time="2025-05-13T00:23:44.691048333Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.629104ms" May 13 00:23:44.817996 kubelet[2092]: W0513 00:23:44.817131 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:44.817996 kubelet[2092]: E0513 00:23:44.817206 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.75:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:44.837960 containerd[1434]: time="2025-05-13T00:23:44.837838560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:44.838311 containerd[1434]: time="2025-05-13T00:23:44.837899849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:44.838311 containerd[1434]: time="2025-05-13T00:23:44.838029798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.838311 containerd[1434]: time="2025-05-13T00:23:44.838216790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.840397 containerd[1434]: time="2025-05-13T00:23:44.840300698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:44.840397 containerd[1434]: time="2025-05-13T00:23:44.840350371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:44.840511 containerd[1434]: time="2025-05-13T00:23:44.840382698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.840756 containerd[1434]: time="2025-05-13T00:23:44.840652930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.843120 containerd[1434]: time="2025-05-13T00:23:44.842892706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:44.843120 containerd[1434]: time="2025-05-13T00:23:44.842949749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:44.843120 containerd[1434]: time="2025-05-13T00:23:44.842967935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.843120 containerd[1434]: time="2025-05-13T00:23:44.843039880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:44.861631 systemd[1]: Started cri-containerd-7293a5a00329e7deb66058e81877e4ec02eb4742fc808ceb4990565a5751b3a3.scope - libcontainer container 7293a5a00329e7deb66058e81877e4ec02eb4742fc808ceb4990565a5751b3a3. May 13 00:23:44.865467 systemd[1]: Started cri-containerd-1885c57e48bbde73fe35d0bf2a76efcb37b3de4b2278c9d0812aac8678c4bc24.scope - libcontainer container 1885c57e48bbde73fe35d0bf2a76efcb37b3de4b2278c9d0812aac8678c4bc24. May 13 00:23:44.867539 systemd[1]: Started cri-containerd-64d5c434c9a0203be385657b09ef60bbcf8001b56b1d33ec82acd838ca611439.scope - libcontainer container 64d5c434c9a0203be385657b09ef60bbcf8001b56b1d33ec82acd838ca611439. May 13 00:23:44.893302 containerd[1434]: time="2025-05-13T00:23:44.893229826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"7293a5a00329e7deb66058e81877e4ec02eb4742fc808ceb4990565a5751b3a3\"" May 13 00:23:44.894451 kubelet[2092]: E0513 00:23:44.894394 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.896764 containerd[1434]: time="2025-05-13T00:23:44.896673511Z" level=info msg="CreateContainer within sandbox \"7293a5a00329e7deb66058e81877e4ec02eb4742fc808ceb4990565a5751b3a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:23:44.897222 containerd[1434]: time="2025-05-13T00:23:44.896735161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"1885c57e48bbde73fe35d0bf2a76efcb37b3de4b2278c9d0812aac8678c4bc24\"" May 13 00:23:44.897865 kubelet[2092]: E0513 00:23:44.897808 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.900251 containerd[1434]: time="2025-05-13T00:23:44.900158697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:90cb9c92498a85665aa7a61af53ef7ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"64d5c434c9a0203be385657b09ef60bbcf8001b56b1d33ec82acd838ca611439\"" May 13 00:23:44.900251 containerd[1434]: time="2025-05-13T00:23:44.900167950Z" 
level=info msg="CreateContainer within sandbox \"1885c57e48bbde73fe35d0bf2a76efcb37b3de4b2278c9d0812aac8678c4bc24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:23:44.901056 kubelet[2092]: E0513 00:23:44.901022 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:44.902500 containerd[1434]: time="2025-05-13T00:23:44.902472299Z" level=info msg="CreateContainer within sandbox \"64d5c434c9a0203be385657b09ef60bbcf8001b56b1d33ec82acd838ca611439\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:23:44.916885 containerd[1434]: time="2025-05-13T00:23:44.916796118Z" level=info msg="CreateContainer within sandbox \"7293a5a00329e7deb66058e81877e4ec02eb4742fc808ceb4990565a5751b3a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7586ddcd18484e0c029e1b059b8674a368b479c5528bfb1fc1b00bf6d555e3ac\"" May 13 00:23:44.917462 containerd[1434]: time="2025-05-13T00:23:44.917429719Z" level=info msg="StartContainer for \"7586ddcd18484e0c029e1b059b8674a368b479c5528bfb1fc1b00bf6d555e3ac\"" May 13 00:23:44.920475 containerd[1434]: time="2025-05-13T00:23:44.920408889Z" level=info msg="CreateContainer within sandbox \"1885c57e48bbde73fe35d0bf2a76efcb37b3de4b2278c9d0812aac8678c4bc24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a71a68949bf52761951cda9d87d59ff2da09f97650492d5989f4a878041191f\"" May 13 00:23:44.920892 containerd[1434]: time="2025-05-13T00:23:44.920865432Z" level=info msg="StartContainer for \"7a71a68949bf52761951cda9d87d59ff2da09f97650492d5989f4a878041191f\"" May 13 00:23:44.921704 containerd[1434]: time="2025-05-13T00:23:44.921610876Z" level=info msg="CreateContainer within sandbox \"64d5c434c9a0203be385657b09ef60bbcf8001b56b1d33ec82acd838ca611439\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4f0b18e3a2c61687a80294d419529b917470f07789a123a216eeb8d93e4fffaa\"" May 13 00:23:44.922123 containerd[1434]: time="2025-05-13T00:23:44.922096702Z" level=info msg="StartContainer for \"4f0b18e3a2c61687a80294d419529b917470f07789a123a216eeb8d93e4fffaa\"" May 13 00:23:44.939619 systemd[1]: Started cri-containerd-7586ddcd18484e0c029e1b059b8674a368b479c5528bfb1fc1b00bf6d555e3ac.scope - libcontainer container 7586ddcd18484e0c029e1b059b8674a368b479c5528bfb1fc1b00bf6d555e3ac. May 13 00:23:44.943543 systemd[1]: Started cri-containerd-7a71a68949bf52761951cda9d87d59ff2da09f97650492d5989f4a878041191f.scope - libcontainer container 7a71a68949bf52761951cda9d87d59ff2da09f97650492d5989f4a878041191f. May 13 00:23:44.948681 systemd[1]: Started cri-containerd-4f0b18e3a2c61687a80294d419529b917470f07789a123a216eeb8d93e4fffaa.scope - libcontainer container 4f0b18e3a2c61687a80294d419529b917470f07789a123a216eeb8d93e4fffaa. 
May 13 00:23:44.974230 containerd[1434]: time="2025-05-13T00:23:44.973981872Z" level=info msg="StartContainer for \"7586ddcd18484e0c029e1b059b8674a368b479c5528bfb1fc1b00bf6d555e3ac\" returns successfully" May 13 00:23:45.010019 containerd[1434]: time="2025-05-13T00:23:45.009860429Z" level=info msg="StartContainer for \"7a71a68949bf52761951cda9d87d59ff2da09f97650492d5989f4a878041191f\" returns successfully" May 13 00:23:45.010019 containerd[1434]: time="2025-05-13T00:23:45.009979260Z" level=info msg="StartContainer for \"4f0b18e3a2c61687a80294d419529b917470f07789a123a216eeb8d93e4fffaa\" returns successfully" May 13 00:23:45.106061 kubelet[2092]: W0513 00:23:45.103087 2092 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.75:6443: connect: connection refused May 13 00:23:45.106061 kubelet[2092]: E0513 00:23:45.103157 2092 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.75:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.75:6443: connect: connection refused" logger="UnhandledError" May 13 00:23:45.109971 kubelet[2092]: E0513 00:23:45.109705 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.75:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.75:6443: connect: connection refused" interval="1.6s" May 13 00:23:45.324610 kubelet[2092]: I0513 00:23:45.324292 2092 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:23:45.727051 kubelet[2092]: E0513 00:23:45.726859 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:45.727051 kubelet[2092]: E0513 00:23:45.726973 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:45.729791 kubelet[2092]: E0513 00:23:45.729584 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:45.729791 kubelet[2092]: E0513 00:23:45.729685 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:45.732204 kubelet[2092]: E0513 00:23:45.732071 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:45.732204 kubelet[2092]: E0513 00:23:45.732164 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:46.734479 kubelet[2092]: E0513 00:23:46.734289 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 13 00:23:46.734479 kubelet[2092]: E0513 00:23:46.734381 2092 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" 
May 13 00:23:46.734479 kubelet[2092]: E0513 00:23:46.734404 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:46.734823 kubelet[2092]: E0513 00:23:46.734499 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:47.199416 kubelet[2092]: E0513 00:23:47.199303 2092 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 13 00:23:47.266633 kubelet[2092]: I0513 00:23:47.266595 2092 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:23:47.266633 kubelet[2092]: E0513 00:23:47.266635 2092 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 13 00:23:47.269497 kubelet[2092]: E0513 00:23:47.269462 2092 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:23:47.312935 kubelet[2092]: E0513 00:23:47.312828 2092 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.183eee6527e1cd34 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:23:43.694548276 +0000 UTC m=+1.595455564,LastTimestamp:2025-05-13 00:23:43.694548276 +0000 UTC m=+1.595455564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 13 00:23:47.369576 kubelet[2092]: E0513 00:23:47.369547 2092 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:23:47.404362 kubelet[2092]: I0513 00:23:47.404323 2092 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:23:47.415424 kubelet[2092]: E0513 00:23:47.415363 2092 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:23:47.415424 kubelet[2092]: I0513 00:23:47.415403 2092 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:23:47.417366 kubelet[2092]: E0513 00:23:47.417329 2092 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:23:47.417366 kubelet[2092]: I0513 00:23:47.417360 2092 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:23:47.419085 kubelet[2092]: E0513 00:23:47.419025 2092 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 00:23:47.693714 kubelet[2092]: I0513 00:23:47.693595 2092 apiserver.go:52] 
"Watching apiserver" May 13 00:23:47.704164 kubelet[2092]: I0513 00:23:47.704106 2092 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:23:47.734887 kubelet[2092]: I0513 00:23:47.734861 2092 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:23:47.735353 kubelet[2092]: I0513 00:23:47.734941 2092 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:23:47.736792 kubelet[2092]: E0513 00:23:47.736758 2092 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:23:47.736792 kubelet[2092]: E0513 00:23:47.736761 2092 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 13 00:23:47.736931 kubelet[2092]: E0513 00:23:47.736905 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:47.736931 kubelet[2092]: E0513 00:23:47.736922 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:49.049129 systemd[1]: Reloading requested from client PID 2376 ('systemctl') (unit session-7.scope)... May 13 00:23:49.049144 systemd[1]: Reloading... May 13 00:23:49.114516 zram_generator::config[2418]: No configuration found. May 13 00:23:49.267118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:23:49.332760 systemd[1]: Reloading finished in 283 ms. May 13 00:23:49.365224 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:49.379458 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:23:49.379715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:49.379769 systemd[1]: kubelet.service: Consumed 2.018s CPU time, 122.8M memory peak, 0B memory swap peak. May 13 00:23:49.391865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:23:49.502910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:23:49.507931 (kubelet)[2457]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:23:49.545023 kubelet[2457]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:23:49.545023 kubelet[2457]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 13 00:23:49.545023 kubelet[2457]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 13 00:23:49.545517 kubelet[2457]: I0513 00:23:49.545050 2457 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:23:49.552630 kubelet[2457]: I0513 00:23:49.552522 2457 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 13 00:23:49.552630 kubelet[2457]: I0513 00:23:49.552553 2457 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:23:49.552859 kubelet[2457]: I0513 00:23:49.552796 2457 server.go:954] "Client rotation is on, will bootstrap in background" May 13 00:23:49.554039 kubelet[2457]: I0513 00:23:49.554017 2457 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 13 00:23:49.556663 kubelet[2457]: I0513 00:23:49.556622 2457 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:23:49.559686 kubelet[2457]: E0513 00:23:49.559636 2457 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:23:49.560006 kubelet[2457]: I0513 00:23:49.559793 2457 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:23:49.562348 kubelet[2457]: I0513 00:23:49.562325 2457 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:23:49.562712 kubelet[2457]: I0513 00:23:49.562682 2457 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:23:49.562960 kubelet[2457]: I0513 00:23:49.562786 2457 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:23:49.563079 kubelet[2457]: I0513 00:23:49.563065 2457 topology_manager.go:138] "Creating 
topology manager with none policy" May 13 00:23:49.563133 kubelet[2457]: I0513 00:23:49.563125 2457 container_manager_linux.go:304] "Creating device plugin manager" May 13 00:23:49.563245 kubelet[2457]: I0513 00:23:49.563234 2457 state_mem.go:36] "Initialized new in-memory state store" May 13 00:23:49.563472 kubelet[2457]: I0513 00:23:49.563457 2457 kubelet.go:446] "Attempting to sync node with API server" May 13 00:23:49.563552 kubelet[2457]: I0513 00:23:49.563540 2457 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:23:49.563609 kubelet[2457]: I0513 00:23:49.563601 2457 kubelet.go:352] "Adding apiserver pod source" May 13 00:23:49.563658 kubelet[2457]: I0513 00:23:49.563649 2457 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:23:49.564831 kubelet[2457]: I0513 00:23:49.564364 2457 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:23:49.565589 kubelet[2457]: I0513 00:23:49.565565 2457 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:23:49.567279 kubelet[2457]: I0513 00:23:49.566205 2457 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 13 00:23:49.567279 kubelet[2457]: I0513 00:23:49.566255 2457 server.go:1287] "Started kubelet" May 13 00:23:49.567279 kubelet[2457]: I0513 00:23:49.566740 2457 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:23:49.567279 kubelet[2457]: I0513 00:23:49.566828 2457 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:23:49.567279 kubelet[2457]: I0513 00:23:49.567072 2457 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:23:49.568053 kubelet[2457]: I0513 00:23:49.568025 2457 server.go:490] "Adding debug handlers to kubelet server" May 13 00:23:49.570153 kubelet[2457]: I0513 00:23:49.570123 2457 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:23:49.575575 kubelet[2457]: I0513 00:23:49.573398 2457 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:23:49.575898 kubelet[2457]: I0513 00:23:49.575866 2457 volume_manager.go:297] "Starting Kubelet Volume Manager" May 13 00:23:49.576129 kubelet[2457]: E0513 00:23:49.576100 2457 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 13 00:23:49.576250 kubelet[2457]: I0513 00:23:49.576236 2457 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:23:49.576383 kubelet[2457]: I0513 00:23:49.576368 2457 reconciler.go:26] "Reconciler: start to sync state" May 13 00:23:49.590487 kubelet[2457]: I0513 00:23:49.589617 2457 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:23:49.590487 kubelet[2457]: I0513 00:23:49.590204 2457 factory.go:221] Registration of the systemd container factory successfully May 13 00:23:49.590487 kubelet[2457]: I0513 00:23:49.590322 2457 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:23:49.591688 kubelet[2457]: I0513 00:23:49.590837 2457 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:23:49.591688 kubelet[2457]: I0513 00:23:49.590856 2457 status_manager.go:227] "Starting to sync pod status with apiserver" May 13 00:23:49.591688 kubelet[2457]: I0513 00:23:49.590874 2457 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 13 00:23:49.591688 kubelet[2457]: I0513 00:23:49.590881 2457 kubelet.go:2388] "Starting kubelet main sync loop" May 13 00:23:49.591688 kubelet[2457]: E0513 00:23:49.590919 2457 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:23:49.591794 kubelet[2457]: E0513 00:23:49.591730 2457 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:23:49.595532 kubelet[2457]: I0513 00:23:49.595271 2457 factory.go:221] Registration of the containerd container factory successfully May 13 00:23:49.624773 kubelet[2457]: I0513 00:23:49.624747 2457 cpu_manager.go:221] "Starting CPU manager" policy="none" May 13 00:23:49.624773 kubelet[2457]: I0513 00:23:49.624767 2457 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 13 00:23:49.624914 kubelet[2457]: I0513 00:23:49.624787 2457 state_mem.go:36] "Initialized new in-memory state store" May 13 00:23:49.624948 kubelet[2457]: I0513 00:23:49.624938 2457 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 13 00:23:49.624974 kubelet[2457]: I0513 00:23:49.624948 2457 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 13 00:23:49.624974 kubelet[2457]: I0513 00:23:49.624965 2457 policy_none.go:49] "None policy: Start" May 13 00:23:49.624974 kubelet[2457]: I0513 00:23:49.624974 2457 memory_manager.go:186] "Starting memorymanager" policy="None" May 13 00:23:49.625032 kubelet[2457]: I0513 00:23:49.624982 2457 state_mem.go:35] "Initializing new in-memory state store" May 13 00:23:49.625101 kubelet[2457]: I0513 00:23:49.625088 2457 state_mem.go:75] "Updated machine memory state" May 13 00:23:49.628572 kubelet[2457]: I0513 00:23:49.628541 2457 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:23:49.628811 kubelet[2457]: I0513 00:23:49.628698 2457 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:23:49.628811 kubelet[2457]: I0513 00:23:49.628717 2457 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:23:49.629099 kubelet[2457]: I0513 00:23:49.628987 2457 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:23:49.629860 kubelet[2457]: E0513 00:23:49.629836 2457 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 13 00:23:49.691839 kubelet[2457]: I0513 00:23:49.691787 2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:23:49.691971 kubelet[2457]: I0513 00:23:49.691787 2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:23:49.692015 kubelet[2457]: I0513 00:23:49.691896 2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.732290 kubelet[2457]: I0513 00:23:49.732264 2457 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 13 00:23:49.739549 kubelet[2457]: I0513 00:23:49.739452 2457 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 13 00:23:49.739549 kubelet[2457]: I0513 00:23:49.739530 2457 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 13 00:23:49.777928 kubelet[2457]: I0513 00:23:49.777880 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.777928 kubelet[2457]: I0513 00:23:49.777926 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:49.778091 kubelet[2457]: I0513 00:23:49.777948 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:49.778091 kubelet[2457]: I0513 00:23:49.777968 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.778091 kubelet[2457]: I0513 00:23:49.777984 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 13 00:23:49.778091 kubelet[2457]: I0513 00:23:49.777999 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/90cb9c92498a85665aa7a61af53ef7ff-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"90cb9c92498a85665aa7a61af53ef7ff\") " pod="kube-system/kube-apiserver-localhost" May 13 00:23:49.778091 kubelet[2457]: I0513 00:23:49.778016 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.778194 kubelet[2457]: I0513 00:23:49.778040 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.778194 kubelet[2457]: I0513 00:23:49.778056 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:23:49.997323 kubelet[2457]: E0513 00:23:49.997201 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:49.998205 kubelet[2457]: E0513 00:23:49.998179 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:49.998315 kubelet[2457]: E0513 00:23:49.998293 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:50.107559 sudo[2496]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 13 00:23:50.107842 sudo[2496]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 13 00:23:50.556940 sudo[2496]: pam_unix(sudo:session): session closed for user root May 13 00:23:50.564655 kubelet[2457]: I0513 00:23:50.564619 2457 apiserver.go:52] "Watching apiserver" May 13 00:23:50.576512 kubelet[2457]: I0513 00:23:50.576462 2457 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:23:50.605456 kubelet[2457]: I0513 00:23:50.605311 2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 13 00:23:50.605729 kubelet[2457]: I0513 00:23:50.605515 2457 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 13 00:23:50.607199 kubelet[2457]: E0513 00:23:50.605859 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:50.612533 kubelet[2457]: E0513 00:23:50.612501 2457 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 13 00:23:50.612692 kubelet[2457]: E0513 00:23:50.612668 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:50.613824 kubelet[2457]: E0513 00:23:50.613083 2457 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 13 
00:23:50.613824 kubelet[2457]: E0513 00:23:50.613209 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:50.640127 kubelet[2457]: I0513 00:23:50.640048 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.640030458 podStartE2EDuration="1.640030458s" podCreationTimestamp="2025-05-13 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:50.632995705 +0000 UTC m=+1.121635190" watchObservedRunningTime="2025-05-13 00:23:50.640030458 +0000 UTC m=+1.128669943" May 13 00:23:50.647424 kubelet[2457]: I0513 00:23:50.647376 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.647362124 podStartE2EDuration="1.647362124s" podCreationTimestamp="2025-05-13 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:50.640407464 +0000 UTC m=+1.129047029" watchObservedRunningTime="2025-05-13 00:23:50.647362124 +0000 UTC m=+1.136001609" May 13 00:23:50.656072 kubelet[2457]: I0513 00:23:50.656005 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.655985833 podStartE2EDuration="1.655985833s" podCreationTimestamp="2025-05-13 00:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:50.647656596 +0000 UTC m=+1.136296041" watchObservedRunningTime="2025-05-13 00:23:50.655985833 +0000 UTC m=+1.144625278" May 13 00:23:51.607739 kubelet[2457]: E0513 00:23:51.607281 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:51.607739 kubelet[2457]: E0513 00:23:51.607451 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:52.608366 kubelet[2457]: E0513 00:23:52.608303 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:52.827143 sudo[1614]: pam_unix(sudo:session): session closed for user root May 13 00:23:52.828887 sshd[1611]: pam_unix(sshd:session): session closed for user core May 13 00:23:52.832377 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. May 13 00:23:52.832701 systemd[1]: sshd@6-10.0.0.75:22-10.0.0.1:51858.service: Deactivated successfully. May 13 00:23:52.834272 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:23:52.835089 systemd[1]: session-7.scope: Consumed 8.708s CPU time, 154.4M memory peak, 0B memory swap peak. May 13 00:23:52.835820 systemd-logind[1422]: Removed session 7. 
May 13 00:23:54.863430 kubelet[2457]: I0513 00:23:54.863397 2457 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 13 00:23:54.866821 containerd[1434]: time="2025-05-13T00:23:54.866671623Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:23:54.867163 kubelet[2457]: I0513 00:23:54.866911 2457 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 13 00:23:55.523349 kubelet[2457]: W0513 00:23:55.523316 2457 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 00:23:55.523541 kubelet[2457]: E0513 00:23:55.523358 2457 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 00:23:55.523757 kubelet[2457]: W0513 00:23:55.523726 2457 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 13 00:23:55.524318 kubelet[2457]: E0513 00:23:55.523760 2457 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 13 00:23:55.530627 systemd[1]: Created slice kubepods-besteffort-podc65671ad_a522_4428_b710_6fc567abca4f.slice - libcontainer container kubepods-besteffort-podc65671ad_a522_4428_b710_6fc567abca4f.slice. May 13 00:23:55.559342 systemd[1]: Created slice kubepods-burstable-podedf65f11_7682_4b4e_aa0f_28ac07d5b993.slice - libcontainer container kubepods-burstable-podedf65f11_7682_4b4e_aa0f_28ac07d5b993.slice. 
May 13 00:23:55.617481 kubelet[2457]: I0513 00:23:55.617378 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-net\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617481 kubelet[2457]: I0513 00:23:55.617420 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-kernel\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617481 kubelet[2457]: I0513 00:23:55.617450 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphq4\" (UniqueName: \"kubernetes.io/projected/c65671ad-a522-4428-b710-6fc567abca4f-kube-api-access-rphq4\") pod \"kube-proxy-dn7n2\" (UID: \"c65671ad-a522-4428-b710-6fc567abca4f\") " pod="kube-system/kube-proxy-dn7n2" May 13 00:23:55.617481 kubelet[2457]: I0513 00:23:55.617473 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617481 kubelet[2457]: I0513 00:23:55.617490 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c65671ad-a522-4428-b710-6fc567abca4f-kube-proxy\") pod \"kube-proxy-dn7n2\" (UID: \"c65671ad-a522-4428-b710-6fc567abca4f\") " pod="kube-system/kube-proxy-dn7n2" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617504 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hostproc\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617518 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cni-path\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617535 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-lib-modules\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617550 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-xtables-lock\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617567 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c87dj\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-kube-api-access-c87dj\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617814 kubelet[2457]: I0513 00:23:55.617583 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c65671ad-a522-4428-b710-6fc567abca4f-xtables-lock\") pod \"kube-proxy-dn7n2\" (UID: \"c65671ad-a522-4428-b710-6fc567abca4f\") " pod="kube-system/kube-proxy-dn7n2" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617599 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-cgroup\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617613 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-etc-cni-netd\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617629 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edf65f11-7682-4b4e-aa0f-28ac07d5b993-clustermesh-secrets\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617647 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-run\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617673 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-bpf-maps\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.617938 kubelet[2457]: I0513 00:23:55.617690 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-config-path\") pod \"cilium-6vlcb\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " pod="kube-system/cilium-6vlcb" May 13 00:23:55.618062 kubelet[2457]: I0513 00:23:55.617706 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c65671ad-a522-4428-b710-6fc567abca4f-lib-modules\") pod \"kube-proxy-dn7n2\" (UID: \"c65671ad-a522-4428-b710-6fc567abca4f\") " pod="kube-system/kube-proxy-dn7n2" May 13 00:23:55.856064 kubelet[2457]: E0513 00:23:55.855948 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:55.856877 containerd[1434]: time="2025-05-13T00:23:55.856822400Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dn7n2,Uid:c65671ad-a522-4428-b710-6fc567abca4f,Namespace:kube-system,Attempt:0,}" May 13 00:23:55.874879 containerd[1434]: time="2025-05-13T00:23:55.874767713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:55.874879 containerd[1434]: time="2025-05-13T00:23:55.874842827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:55.874879 containerd[1434]: time="2025-05-13T00:23:55.874857794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:55.875291 containerd[1434]: time="2025-05-13T00:23:55.874939071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:55.894661 systemd[1]: Started cri-containerd-9716fce01dad70f4f347b00f03a79ee5ccc1fabaed8294306c80c393b74d8b3b.scope - libcontainer container 9716fce01dad70f4f347b00f03a79ee5ccc1fabaed8294306c80c393b74d8b3b. May 13 00:23:55.912215 containerd[1434]: time="2025-05-13T00:23:55.912124564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dn7n2,Uid:c65671ad-a522-4428-b710-6fc567abca4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"9716fce01dad70f4f347b00f03a79ee5ccc1fabaed8294306c80c393b74d8b3b\"" May 13 00:23:55.912914 kubelet[2457]: E0513 00:23:55.912869 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:55.916190 containerd[1434]: time="2025-05-13T00:23:55.916095048Z" level=info msg="CreateContainer within sandbox \"9716fce01dad70f4f347b00f03a79ee5ccc1fabaed8294306c80c393b74d8b3b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:23:55.929535 containerd[1434]: time="2025-05-13T00:23:55.929491214Z" level=info msg="CreateContainer within sandbox \"9716fce01dad70f4f347b00f03a79ee5ccc1fabaed8294306c80c393b74d8b3b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a84d28eabcf56b190fd10299ddd2a6fdb977c6357fe016d978be970e048f8a81\"" May 13 00:23:55.930044 containerd[1434]: time="2025-05-13T00:23:55.930015613Z" level=info msg="StartContainer for \"a84d28eabcf56b190fd10299ddd2a6fdb977c6357fe016d978be970e048f8a81\"" May 13 00:23:55.952628 systemd[1]: Started cri-containerd-a84d28eabcf56b190fd10299ddd2a6fdb977c6357fe016d978be970e048f8a81.scope - libcontainer container a84d28eabcf56b190fd10299ddd2a6fdb977c6357fe016d978be970e048f8a81. May 13 00:23:55.983166 systemd[1]: Created slice kubepods-besteffort-pod85303fae_407f_44da_b06e_ee1188ca1697.slice - libcontainer container kubepods-besteffort-pod85303fae_407f_44da_b06e_ee1188ca1697.slice. 
May 13 00:23:56.005124 containerd[1434]: time="2025-05-13T00:23:56.003645703Z" level=info msg="StartContainer for \"a84d28eabcf56b190fd10299ddd2a6fdb977c6357fe016d978be970e048f8a81\" returns successfully" May 13 00:23:56.021601 kubelet[2457]: I0513 00:23:56.021534 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnrrv\" (UniqueName: \"kubernetes.io/projected/85303fae-407f-44da-b06e-ee1188ca1697-kube-api-access-fnrrv\") pod \"cilium-operator-6c4d7847fc-lhtgb\" (UID: \"85303fae-407f-44da-b06e-ee1188ca1697\") " pod="kube-system/cilium-operator-6c4d7847fc-lhtgb" May 13 00:23:56.021739 kubelet[2457]: I0513 00:23:56.021625 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85303fae-407f-44da-b06e-ee1188ca1697-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lhtgb\" (UID: \"85303fae-407f-44da-b06e-ee1188ca1697\") " pod="kube-system/cilium-operator-6c4d7847fc-lhtgb" May 13 00:23:56.287856 kubelet[2457]: E0513 00:23:56.287807 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:56.288321 containerd[1434]: time="2025-05-13T00:23:56.288260920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lhtgb,Uid:85303fae-407f-44da-b06e-ee1188ca1697,Namespace:kube-system,Attempt:0,}" May 13 00:23:56.308699 containerd[1434]: time="2025-05-13T00:23:56.308594488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:56.308699 containerd[1434]: time="2025-05-13T00:23:56.308655154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:56.308699 containerd[1434]: time="2025-05-13T00:23:56.308670401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:56.309105 containerd[1434]: time="2025-05-13T00:23:56.308763961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:56.327768 systemd[1]: Started cri-containerd-1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c.scope - libcontainer container 1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c. 
May 13 00:23:56.353902 containerd[1434]: time="2025-05-13T00:23:56.353845873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lhtgb,Uid:85303fae-407f-44da-b06e-ee1188ca1697,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\"" May 13 00:23:56.354835 kubelet[2457]: E0513 00:23:56.354640 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:56.356799 containerd[1434]: time="2025-05-13T00:23:56.356733473Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 13 00:23:56.515147 kubelet[2457]: E0513 00:23:56.515107 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:56.616846 kubelet[2457]: E0513 00:23:56.616463 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:56.616846 kubelet[2457]: E0513 00:23:56.616625 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:56.626660 kubelet[2457]: I0513 00:23:56.626519 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dn7n2" podStartSLOduration=1.626501716 podStartE2EDuration="1.626501716s" podCreationTimestamp="2025-05-13 00:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:23:56.625964566 +0000 UTC m=+7.114604091" watchObservedRunningTime="2025-05-13 00:23:56.626501716 +0000 UTC m=+7.115141201" May 13 00:23:56.720409 kubelet[2457]: E0513 00:23:56.720276 2457 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 13 00:23:56.720409 kubelet[2457]: E0513 00:23:56.720310 2457 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-6vlcb: failed to sync secret cache: timed out waiting for the condition May 13 00:23:56.720409 kubelet[2457]: E0513 00:23:56.720379 2457 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls podName:edf65f11-7682-4b4e-aa0f-28ac07d5b993 nodeName:}" failed. No retries permitted until 2025-05-13 00:23:57.220350803 +0000 UTC m=+7.708990288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls") pod "cilium-6vlcb" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993") : failed to sync secret cache: timed out waiting for the condition May 13 00:23:56.763639 kubelet[2457]: E0513 00:23:56.763605 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.362752 kubelet[2457]: E0513 00:23:57.362697 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.363552 containerd[1434]: time="2025-05-13T00:23:57.363299086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vlcb,Uid:edf65f11-7682-4b4e-aa0f-28ac07d5b993,Namespace:kube-system,Attempt:0,}" May 13 00:23:57.391527 containerd[1434]: time="2025-05-13T00:23:57.391322097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:23:57.391527 containerd[1434]: time="2025-05-13T00:23:57.391370637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:23:57.391527 containerd[1434]: time="2025-05-13T00:23:57.391380961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:57.391527 containerd[1434]: time="2025-05-13T00:23:57.391483202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:23:57.411623 systemd[1]: Started cri-containerd-4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31.scope - libcontainer container 4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31.
May 13 00:23:57.425435 kubelet[2457]: E0513 00:23:57.425373 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.437329 containerd[1434]: time="2025-05-13T00:23:57.437292591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6vlcb,Uid:edf65f11-7682-4b4e-aa0f-28ac07d5b993,Namespace:kube-system,Attempt:0,} returns sandbox id \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\"" May 13 00:23:57.438460 kubelet[2457]: E0513 00:23:57.438246 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.620974 kubelet[2457]: E0513 00:23:57.619895 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.620974 kubelet[2457]: E0513 00:23:57.620237 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:57.620974 kubelet[2457]: E0513 00:23:57.620429 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:58.142486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091922969.mount: Deactivated successfully. May 13 00:23:58.622153 kubelet[2457]: E0513 00:23:58.621773 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:59.160984 containerd[1434]: time="2025-05-13T00:23:59.160936772Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:59.162165 containerd[1434]: time="2025-05-13T00:23:59.162104155Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 13 00:23:59.163485 containerd[1434]: time="2025-05-13T00:23:59.162907127Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:23:59.164194 containerd[1434]: time="2025-05-13T00:23:59.164056424Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.80724884s" May 13 00:23:59.164194 containerd[1434]: time="2025-05-13T00:23:59.164098600Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 13 00:23:59.165138 containerd[1434]: time="2025-05-13T00:23:59.165035220Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 13 00:23:59.165971 containerd[1434]: time="2025-05-13T00:23:59.165933946Z" level=info msg="CreateContainer within sandbox \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 13 00:23:59.176462 containerd[1434]: time="2025-05-13T00:23:59.176347448Z" level=info msg="CreateContainer within sandbox \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\"" May 13 00:23:59.177035 containerd[1434]: time="2025-05-13T00:23:59.176991162Z" level=info msg="StartContainer for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\"" May 13 00:23:59.201596 systemd[1]: Started cri-containerd-5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737.scope - libcontainer container 5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737. May 13 00:23:59.221353 containerd[1434]: time="2025-05-13T00:23:59.221313417Z" level=info msg="StartContainer for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" returns successfully" May 13 00:23:59.646527 kubelet[2457]: E0513 00:23:59.646488 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:23:59.657603 kubelet[2457]: I0513 00:23:59.657541 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lhtgb" podStartSLOduration=1.848696999 podStartE2EDuration="4.657526023s" podCreationTimestamp="2025-05-13 00:23:55 +0000 UTC" firstStartedPulling="2025-05-13 00:23:56.356039575 +0000 UTC m=+6.844679060" lastFinishedPulling="2025-05-13 00:23:59.164868599 +0000 UTC m=+9.653508084" observedRunningTime="2025-05-13 00:23:59.657057653 +0000 UTC m=+10.145697098" watchObservedRunningTime="2025-05-13 00:23:59.657526023 +0000 UTC m=+10.146165508" May 13 00:24:00.640802 kubelet[2457]: E0513 00:24:00.640726 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:05.678925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774853973.mount: Deactivated successfully. May 13 00:24:06.119536 update_engine[1423]: I20250513 00:24:06.119470 1423 update_attempter.cc:509] Updating boot flags...
May 13 00:24:06.152530 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2906) May 13 00:24:06.202505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2910) May 13 00:24:06.223567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2910) May 13 00:24:07.165376 containerd[1434]: time="2025-05-13T00:24:07.165233294Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:07.166726 containerd[1434]: time="2025-05-13T00:24:07.166658874Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 00:24:07.167619 containerd[1434]: time="2025-05-13T00:24:07.167575492Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:24:07.168978 containerd[1434]: time="2025-05-13T00:24:07.168941538Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.003868986s" May 13 00:24:07.169038 containerd[1434]: time="2025-05-13T00:24:07.168977907Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 00:24:07.178987 containerd[1434]: time="2025-05-13T00:24:07.178941081Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:24:07.200077 containerd[1434]: time="2025-05-13T00:24:07.200022144Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\"" May 13 00:24:07.200658 containerd[1434]: time="2025-05-13T00:24:07.200632769Z" level=info msg="StartContainer for \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\"" May 13 00:24:07.231658 systemd[1]: Started cri-containerd-ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9.scope - libcontainer container ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9. May 13 00:24:07.252585 containerd[1434]: time="2025-05-13T00:24:07.252538217Z" level=info msg="StartContainer for \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\" returns successfully" May 13 00:24:07.289160 systemd[1]: cri-containerd-ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9.scope: Deactivated successfully. 
May 13 00:24:07.420500 containerd[1434]: time="2025-05-13T00:24:07.416159725Z" level=info msg="shim disconnected" id=ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9 namespace=k8s.io May 13 00:24:07.420500 containerd[1434]: time="2025-05-13T00:24:07.420001320Z" level=warning msg="cleaning up after shim disconnected" id=ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9 namespace=k8s.io May 13 00:24:07.420500 containerd[1434]: time="2025-05-13T00:24:07.420018524Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:07.431397 containerd[1434]: time="2025-05-13T00:24:07.431342462Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:24:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:24:07.655843 kubelet[2457]: E0513 00:24:07.655792 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:07.659318 containerd[1434]: time="2025-05-13T00:24:07.659255329Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:24:07.685870 containerd[1434]: time="2025-05-13T00:24:07.685741960Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\"" May 13 00:24:07.686408 containerd[1434]: time="2025-05-13T00:24:07.686199269Z" level=info msg="StartContainer for \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\"" May 13 00:24:07.714648 systemd[1]: Started cri-containerd-f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd.scope - libcontainer container f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd. May 13 00:24:07.737187 containerd[1434]: time="2025-05-13T00:24:07.737056068Z" level=info msg="StartContainer for \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\" returns successfully" May 13 00:24:07.759194 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:24:07.759406 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:24:07.759486 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 00:24:07.768020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:24:07.768287 systemd[1]: cri-containerd-f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd.scope: Deactivated successfully. May 13 00:24:07.781155 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 00:24:07.784296 containerd[1434]: time="2025-05-13T00:24:07.784141767Z" level=info msg="shim disconnected" id=f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd namespace=k8s.io May 13 00:24:07.784296 containerd[1434]: time="2025-05-13T00:24:07.784196140Z" level=warning msg="cleaning up after shim disconnected" id=f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd namespace=k8s.io May 13 00:24:07.784296 containerd[1434]: time="2025-05-13T00:24:07.784206743Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:08.198176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9-rootfs.mount: Deactivated successfully. May 13 00:24:08.657668 kubelet[2457]: E0513 00:24:08.657340 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:08.659691 containerd[1434]: time="2025-05-13T00:24:08.659478294Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:24:08.675018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264770047.mount: Deactivated successfully. May 13 00:24:08.676744 containerd[1434]: time="2025-05-13T00:24:08.676695478Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\"" May 13 00:24:08.677838 containerd[1434]: time="2025-05-13T00:24:08.677794687Z" level=info msg="StartContainer for \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\"" May 13 00:24:08.709666 systemd[1]: Started cri-containerd-cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f.scope - libcontainer container cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f. May 13 00:24:08.735571 containerd[1434]: time="2025-05-13T00:24:08.735527937Z" level=info msg="StartContainer for \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\" returns successfully" May 13 00:24:08.746215 systemd[1]: cri-containerd-cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f.scope: Deactivated successfully. May 13 00:24:08.778105 containerd[1434]: time="2025-05-13T00:24:08.778039935Z" level=info msg="shim disconnected" id=cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f namespace=k8s.io May 13 00:24:08.778613 containerd[1434]: time="2025-05-13T00:24:08.778395016Z" level=warning msg="cleaning up after shim disconnected" id=cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f namespace=k8s.io May 13 00:24:08.778613 containerd[1434]: time="2025-05-13T00:24:08.778414980Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:09.197772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f-rootfs.mount: Deactivated successfully. 
May 13 00:24:09.661497 kubelet[2457]: E0513 00:24:09.660860 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:09.666616 containerd[1434]: time="2025-05-13T00:24:09.666507370Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:24:09.693407 containerd[1434]: time="2025-05-13T00:24:09.693296834Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\"" May 13 00:24:09.693903 containerd[1434]: time="2025-05-13T00:24:09.693878479Z" level=info msg="StartContainer for \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\"" May 13 00:24:09.724696 systemd[1]: Started cri-containerd-9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464.scope - libcontainer container 9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464. May 13 00:24:09.741562 systemd[1]: cri-containerd-9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464.scope: Deactivated successfully. May 13 00:24:09.743593 containerd[1434]: time="2025-05-13T00:24:09.743526158Z" level=info msg="StartContainer for \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\" returns successfully" May 13 00:24:09.760727 containerd[1434]: time="2025-05-13T00:24:09.760660978Z" level=info msg="shim disconnected" id=9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464 namespace=k8s.io May 13 00:24:09.760727 containerd[1434]: time="2025-05-13T00:24:09.760708948Z" level=warning msg="cleaning up after shim disconnected" id=9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464 namespace=k8s.io May 13 00:24:09.760727 containerd[1434]: time="2025-05-13T00:24:09.760718070Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:10.197851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464-rootfs.mount: Deactivated successfully. 
May 13 00:24:10.665447 kubelet[2457]: E0513 00:24:10.665275 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:10.668422 containerd[1434]: time="2025-05-13T00:24:10.668379350Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:24:10.685426 containerd[1434]: time="2025-05-13T00:24:10.685362844Z" level=info msg="CreateContainer within sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\"" May 13 00:24:10.685954 containerd[1434]: time="2025-05-13T00:24:10.685923280Z" level=info msg="StartContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\"" May 13 00:24:10.713642 systemd[1]: Started cri-containerd-4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26.scope - libcontainer container 4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26. May 13 00:24:10.742731 containerd[1434]: time="2025-05-13T00:24:10.742665115Z" level=info msg="StartContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" returns successfully" May 13 00:24:10.881063 kubelet[2457]: I0513 00:24:10.881018 2457 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 13 00:24:10.914000 systemd[1]: Created slice kubepods-burstable-pod1164f026_353c_4a69_be93_c8628c63f1e6.slice - libcontainer container kubepods-burstable-pod1164f026_353c_4a69_be93_c8628c63f1e6.slice. May 13 00:24:10.922304 systemd[1]: Created slice kubepods-burstable-pod073f4473_0342_43f2_8768_f76323327049.slice - libcontainer container kubepods-burstable-pod073f4473_0342_43f2_8768_f76323327049.slice. 
May 13 00:24:10.929184 kubelet[2457]: I0513 00:24:10.928929 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/073f4473-0342-43f2-8768-f76323327049-config-volume\") pod \"coredns-668d6bf9bc-87v4j\" (UID: \"073f4473-0342-43f2-8768-f76323327049\") " pod="kube-system/coredns-668d6bf9bc-87v4j" May 13 00:24:10.929184 kubelet[2457]: I0513 00:24:10.928973 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqpsp\" (UniqueName: \"kubernetes.io/projected/073f4473-0342-43f2-8768-f76323327049-kube-api-access-wqpsp\") pod \"coredns-668d6bf9bc-87v4j\" (UID: \"073f4473-0342-43f2-8768-f76323327049\") " pod="kube-system/coredns-668d6bf9bc-87v4j" May 13 00:24:10.929184 kubelet[2457]: I0513 00:24:10.928998 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1164f026-353c-4a69-be93-c8628c63f1e6-config-volume\") pod \"coredns-668d6bf9bc-xtkdr\" (UID: \"1164f026-353c-4a69-be93-c8628c63f1e6\") " pod="kube-system/coredns-668d6bf9bc-xtkdr" May 13 00:24:10.929184 kubelet[2457]: I0513 00:24:10.929016 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnbk\" (UniqueName: \"kubernetes.io/projected/1164f026-353c-4a69-be93-c8628c63f1e6-kube-api-access-xcnbk\") pod \"coredns-668d6bf9bc-xtkdr\" (UID: \"1164f026-353c-4a69-be93-c8628c63f1e6\") " pod="kube-system/coredns-668d6bf9bc-xtkdr" May 13 00:24:11.217620 kubelet[2457]: E0513 00:24:11.217475 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:11.218247 containerd[1434]: time="2025-05-13T00:24:11.218193653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtkdr,Uid:1164f026-353c-4a69-be93-c8628c63f1e6,Namespace:kube-system,Attempt:0,}" May 13 00:24:11.231905 kubelet[2457]: E0513 00:24:11.231866 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:11.232432 containerd[1434]: time="2025-05-13T00:24:11.232384478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87v4j,Uid:073f4473-0342-43f2-8768-f76323327049,Namespace:kube-system,Attempt:0,}" May 13 00:24:11.670469 kubelet[2457]: E0513 00:24:11.670294 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:12.672152 kubelet[2457]: E0513 00:24:12.672122 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:12.926942 systemd-networkd[1377]: cilium_host: Link UP May 13 00:24:12.927896 systemd-networkd[1377]: cilium_net: Link UP May 13 00:24:12.928091 systemd-networkd[1377]: cilium_net: Gained carrier May 13 00:24:12.928223 systemd-networkd[1377]: cilium_host: Gained carrier May 13 00:24:13.011726 systemd-networkd[1377]: cilium_vxlan: Link UP May 13 00:24:13.011734 systemd-networkd[1377]: cilium_vxlan: Gained carrier May 13 00:24:13.307491 kernel: NET: Registered PF_ALG protocol family May 13 00:24:13.400592 systemd-networkd[1377]: cilium_net: Gained IPv6LL May 13 00:24:13.423546 systemd-networkd[1377]: cilium_host: Gained IPv6LL May 13 00:24:13.676053 kubelet[2457]: E0513 00:24:13.675934 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:13.897851 systemd-networkd[1377]: lxc_health: Link UP May 13 00:24:13.906417 systemd-networkd[1377]: lxc_health: Gained carrier May 13 00:24:14.207613 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL May 13 00:24:14.341678 systemd-networkd[1377]: lxc7f5c5b4f0a99: Link UP May 13 00:24:14.345512 kernel: eth0: renamed from tmpd031c May 13 00:24:14.350803 systemd-networkd[1377]: lxc1d22b45ccae3: Link UP May 13 00:24:14.361527 kernel: eth0: renamed from tmp73e68 May 13 00:24:14.361731 systemd-networkd[1377]: lxc7f5c5b4f0a99: Gained carrier May 13 00:24:14.368522 systemd-networkd[1377]: lxc1d22b45ccae3: Gained carrier May 13 00:24:15.036278 kubelet[2457]: E0513 00:24:15.035889 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:15.419658 kubelet[2457]: I0513 00:24:15.419236 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6vlcb" podStartSLOduration=10.687063549 podStartE2EDuration="20.419218579s" podCreationTimestamp="2025-05-13 00:23:55 +0000 UTC" firstStartedPulling="2025-05-13 00:23:57.43918676 +0000 UTC m=+7.927826245" lastFinishedPulling="2025-05-13 00:24:07.17134179 +0000 UTC m=+17.659981275" observedRunningTime="2025-05-13 00:24:11.690050049 +0000 UTC m=+22.178689534" watchObservedRunningTime="2025-05-13 00:24:15.419218579 +0000 UTC m=+25.907858064" May 13 00:24:15.487657 systemd-networkd[1377]: lxc1d22b45ccae3: Gained IPv6LL May 13 00:24:15.679869 kubelet[2457]: E0513 00:24:15.679742 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:15.872672 systemd-networkd[1377]: lxc_health: Gained IPv6LL May 13 00:24:16.127699 systemd-networkd[1377]: lxc7f5c5b4f0a99: Gained IPv6LL May 13 00:24:16.681065 kubelet[2457]: E0513 00:24:16.681034 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:18.130463 containerd[1434]: time="2025-05-13T00:24:18.130017664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:24:18.130463 containerd[1434]: time="2025-05-13T00:24:18.130088914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:24:18.130463 containerd[1434]: time="2025-05-13T00:24:18.130111597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:18.130463 containerd[1434]: time="2025-05-13T00:24:18.130286303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:18.153006 containerd[1434]: time="2025-05-13T00:24:18.152878524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:24:18.153006 containerd[1434]: time="2025-05-13T00:24:18.152966016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:24:18.153006 containerd[1434]: time="2025-05-13T00:24:18.152982819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:18.153186 containerd[1434]: time="2025-05-13T00:24:18.153093755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:24:18.153079 systemd[1]: Started cri-containerd-73e6892be0fb2c3b7ce06dda2773c48d2bdd7455ca3bc6ea659b7020fba1721c.scope - libcontainer container 73e6892be0fb2c3b7ce06dda2773c48d2bdd7455ca3bc6ea659b7020fba1721c. May 13 00:24:18.170452 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:24:18.178665 systemd[1]: Started cri-containerd-d031cb12c7d5d6e5d0633e846834d8630eb4530f86da1e7429e425c0b429b647.scope - libcontainer container d031cb12c7d5d6e5d0633e846834d8630eb4530f86da1e7429e425c0b429b647. May 13 00:24:18.189412 containerd[1434]: time="2025-05-13T00:24:18.189369551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xtkdr,Uid:1164f026-353c-4a69-be93-c8628c63f1e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"73e6892be0fb2c3b7ce06dda2773c48d2bdd7455ca3bc6ea659b7020fba1721c\"" May 13 00:24:18.190370 kubelet[2457]: E0513 00:24:18.190335 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:18.193939 containerd[1434]: time="2025-05-13T00:24:18.193893924Z" level=info msg="CreateContainer within sandbox \"73e6892be0fb2c3b7ce06dda2773c48d2bdd7455ca3bc6ea659b7020fba1721c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:24:18.194841 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:24:18.212373 containerd[1434]: time="2025-05-13T00:24:18.212327105Z" level=info msg="CreateContainer within sandbox \"73e6892be0fb2c3b7ce06dda2773c48d2bdd7455ca3bc6ea659b7020fba1721c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5695cbe70e9000d1f51a19d72f23d1e07cb152a41f62dcb55ac936acf57c3b0a\"" May 13 00:24:18.213539 containerd[1434]: time="2025-05-13T00:24:18.213461749Z" level=info msg="StartContainer for \"5695cbe70e9000d1f51a19d72f23d1e07cb152a41f62dcb55ac936acf57c3b0a\"" May 13 00:24:18.213868 containerd[1434]: time="2025-05-13T00:24:18.213829842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-87v4j,Uid:073f4473-0342-43f2-8768-f76323327049,Namespace:kube-system,Attempt:0,} returns sandbox id \"d031cb12c7d5d6e5d0633e846834d8630eb4530f86da1e7429e425c0b429b647\"" May 13 00:24:18.216052 kubelet[2457]: E0513 00:24:18.215949 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:18.218804 containerd[1434]: time="2025-05-13T00:24:18.218637936Z" level=info msg="CreateContainer within sandbox \"d031cb12c7d5d6e5d0633e846834d8630eb4530f86da1e7429e425c0b429b647\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:24:18.231470 containerd[1434]: time="2025-05-13T00:24:18.231397458Z" level=info msg="CreateContainer within sandbox \"d031cb12c7d5d6e5d0633e846834d8630eb4530f86da1e7429e425c0b429b647\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"462fa7f08f997f7da4764e873cbe08d63d482fa345048bb1acf13e955c06eaf3\"" May 13 00:24:18.232078 containerd[1434]: time="2025-05-13T00:24:18.232045391Z" level=info msg="StartContainer for \"462fa7f08f997f7da4764e873cbe08d63d482fa345048bb1acf13e955c06eaf3\"" May 13 00:24:18.239647 systemd[1]: Started cri-containerd-5695cbe70e9000d1f51a19d72f23d1e07cb152a41f62dcb55ac936acf57c3b0a.scope - libcontainer container 5695cbe70e9000d1f51a19d72f23d1e07cb152a41f62dcb55ac936acf57c3b0a. May 13 00:24:18.263682 systemd[1]: Started cri-containerd-462fa7f08f997f7da4764e873cbe08d63d482fa345048bb1acf13e955c06eaf3.scope - libcontainer container 462fa7f08f997f7da4764e873cbe08d63d482fa345048bb1acf13e955c06eaf3. May 13 00:24:18.275870 containerd[1434]: time="2025-05-13T00:24:18.275814909Z" level=info msg="StartContainer for \"5695cbe70e9000d1f51a19d72f23d1e07cb152a41f62dcb55ac936acf57c3b0a\" returns successfully" May 13 00:24:18.308335 containerd[1434]: time="2025-05-13T00:24:18.308272514Z" level=info msg="StartContainer for \"462fa7f08f997f7da4764e873cbe08d63d482fa345048bb1acf13e955c06eaf3\" returns successfully" May 13 00:24:18.686812 kubelet[2457]: E0513 00:24:18.686634 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:18.689985 kubelet[2457]: E0513 00:24:18.689632 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:18.712291 kubelet[2457]: I0513 00:24:18.712223 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-87v4j" podStartSLOduration=23.7122031 podStartE2EDuration="23.7122031s" podCreationTimestamp="2025-05-13 00:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:18.700948755 +0000 UTC m=+29.189588240" watchObservedRunningTime="2025-05-13 00:24:18.7122031 +0000 UTC m=+29.200842585" May 13 00:24:18.727609 kubelet[2457]: I0513 00:24:18.727476 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xtkdr" podStartSLOduration=23.727455461 podStartE2EDuration="23.727455461s" podCreationTimestamp="2025-05-13 00:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:24:18.713771406 +0000 UTC m=+29.202410891" watchObservedRunningTime="2025-05-13 00:24:18.727455461 +0000 UTC m=+29.216094906" May 13 00:24:18.789311 systemd[1]: Started sshd@7-10.0.0.75:22-10.0.0.1:53410.service - OpenSSH per-connection server daemon (10.0.0.1:53410). May 13 00:24:18.827800 sshd[3863]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:18.829548 sshd[3863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:18.833404 systemd-logind[1422]: New session 8 of user core.
May 13 00:24:18.848684 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:24:18.972455 sshd[3863]: pam_unix(sshd:session): session closed for user core May 13 00:24:18.976639 systemd[1]: sshd@7-10.0.0.75:22-10.0.0.1:53410.service: Deactivated successfully. May 13 00:24:18.978835 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:24:18.979794 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. May 13 00:24:18.980615 systemd-logind[1422]: Removed session 8. May 13 00:24:19.692074 kubelet[2457]: E0513 00:24:19.691928 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:19.692074 kubelet[2457]: E0513 00:24:19.692001 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:20.694297 kubelet[2457]: E0513 00:24:20.694266 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:20.695130 kubelet[2457]: E0513 00:24:20.694329 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:24:23.984136 systemd[1]: Started sshd@8-10.0.0.75:22-10.0.0.1:43156.service - OpenSSH per-connection server daemon (10.0.0.1:43156). May 13 00:24:24.028423 sshd[3881]: Accepted publickey for core from 10.0.0.1 port 43156 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:24.031426 sshd[3881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:24.038696 systemd-logind[1422]: New session 9 of user core. May 13 00:24:24.046611 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:24:24.180114 sshd[3881]: pam_unix(sshd:session): session closed for user core May 13 00:24:24.183800 systemd[1]: sshd@8-10.0.0.75:22-10.0.0.1:43156.service: Deactivated successfully. May 13 00:24:24.185566 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:24:24.186291 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit. May 13 00:24:24.187338 systemd-logind[1422]: Removed session 9. May 13 00:24:29.195064 systemd[1]: Started sshd@9-10.0.0.75:22-10.0.0.1:43158.service - OpenSSH per-connection server daemon (10.0.0.1:43158). May 13 00:24:29.242203 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 43158 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:29.243525 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:29.248025 systemd-logind[1422]: New session 10 of user core. May 13 00:24:29.258627 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:24:29.385365 sshd[3899]: pam_unix(sshd:session): session closed for user core May 13 00:24:29.393852 systemd[1]: sshd@9-10.0.0.75:22-10.0.0.1:43158.service: Deactivated successfully. May 13 00:24:29.397591 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:24:29.399109 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. May 13 00:24:29.412795 systemd[1]: Started sshd@10-10.0.0.75:22-10.0.0.1:43160.service - OpenSSH per-connection server daemon (10.0.0.1:43160). 
May 13 00:24:29.413763 systemd-logind[1422]: Removed session 10. May 13 00:24:29.446178 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 43160 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:29.449965 sshd[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:29.459935 systemd-logind[1422]: New session 11 of user core. May 13 00:24:29.470658 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:24:29.623615 sshd[3914]: pam_unix(sshd:session): session closed for user core May 13 00:24:29.632721 systemd[1]: sshd@10-10.0.0.75:22-10.0.0.1:43160.service: Deactivated successfully. May 13 00:24:29.636943 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:24:29.640272 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. May 13 00:24:29.655867 systemd[1]: Started sshd@11-10.0.0.75:22-10.0.0.1:43172.service - OpenSSH per-connection server daemon (10.0.0.1:43172). May 13 00:24:29.656916 systemd-logind[1422]: Removed session 11. May 13 00:24:29.685263 sshd[3927]: Accepted publickey for core from 10.0.0.1 port 43172 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:29.686479 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:29.690048 systemd-logind[1422]: New session 12 of user core. May 13 00:24:29.697632 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:24:29.811618 sshd[3927]: pam_unix(sshd:session): session closed for user core May 13 00:24:29.814885 systemd[1]: sshd@11-10.0.0.75:22-10.0.0.1:43172.service: Deactivated successfully. May 13 00:24:29.817891 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:24:29.818464 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. May 13 00:24:29.819216 systemd-logind[1422]: Removed session 12. May 13 00:24:34.825906 systemd[1]: Started sshd@12-10.0.0.75:22-10.0.0.1:46544.service - OpenSSH per-connection server daemon (10.0.0.1:46544). May 13 00:24:34.858365 sshd[3941]: Accepted publickey for core from 10.0.0.1 port 46544 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:34.859608 sshd[3941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:34.862996 systemd-logind[1422]: New session 13 of user core. May 13 00:24:34.879611 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:24:34.987188 sshd[3941]: pam_unix(sshd:session): session closed for user core May 13 00:24:34.990396 systemd[1]: sshd@12-10.0.0.75:22-10.0.0.1:46544.service: Deactivated successfully. May 13 00:24:34.991973 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:24:34.993870 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. May 13 00:24:34.994680 systemd-logind[1422]: Removed session 13. May 13 00:24:39.998977 systemd[1]: Started sshd@13-10.0.0.75:22-10.0.0.1:46552.service - OpenSSH per-connection server daemon (10.0.0.1:46552). May 13 00:24:40.031768 sshd[3955]: Accepted publickey for core from 10.0.0.1 port 46552 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:40.033567 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:40.037407 systemd-logind[1422]: New session 14 of user core. May 13 00:24:40.048591 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 00:24:40.164310 sshd[3955]: pam_unix(sshd:session): session closed for user core May 13 00:24:40.182123 systemd[1]: sshd@13-10.0.0.75:22-10.0.0.1:46552.service: Deactivated successfully. May 13 00:24:40.184261 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:24:40.185886 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. May 13 00:24:40.187544 systemd[1]: Started sshd@14-10.0.0.75:22-10.0.0.1:46566.service - OpenSSH per-connection server daemon (10.0.0.1:46566). May 13 00:24:40.188518 systemd-logind[1422]: Removed session 14. May 13 00:24:40.220653 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:40.222034 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:40.226022 systemd-logind[1422]: New session 15 of user core. May 13 00:24:40.239592 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:24:40.462344 sshd[3969]: pam_unix(sshd:session): session closed for user core May 13 00:24:40.477846 systemd[1]: sshd@14-10.0.0.75:22-10.0.0.1:46566.service: Deactivated successfully. May 13 00:24:40.479213 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:24:40.480524 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. May 13 00:24:40.489716 systemd[1]: Started sshd@15-10.0.0.75:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). May 13 00:24:40.490740 systemd-logind[1422]: Removed session 15. May 13 00:24:40.522509 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:40.523391 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:40.527490 systemd-logind[1422]: New session 16 of user core. May 13 00:24:40.542588 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:24:41.280293 sshd[3981]: pam_unix(sshd:session): session closed for user core May 13 00:24:41.290077 systemd[1]: sshd@15-10.0.0.75:22-10.0.0.1:46582.service: Deactivated successfully. May 13 00:24:41.294624 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:24:41.298832 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. May 13 00:24:41.309413 systemd[1]: Started sshd@16-10.0.0.75:22-10.0.0.1:46586.service - OpenSSH per-connection server daemon (10.0.0.1:46586). May 13 00:24:41.311827 systemd-logind[1422]: Removed session 16. May 13 00:24:41.343124 sshd[4000]: Accepted publickey for core from 10.0.0.1 port 46586 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:41.344646 sshd[4000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:41.348624 systemd-logind[1422]: New session 17 of user core. May 13 00:24:41.359624 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:24:41.579804 sshd[4000]: pam_unix(sshd:session): session closed for user core May 13 00:24:41.587812 systemd[1]: sshd@16-10.0.0.75:22-10.0.0.1:46586.service: Deactivated successfully. May 13 00:24:41.589667 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:24:41.591008 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. May 13 00:24:41.595901 systemd[1]: Started sshd@17-10.0.0.75:22-10.0.0.1:46598.service - OpenSSH per-connection server daemon (10.0.0.1:46598). 
May 13 00:24:41.597140 systemd-logind[1422]: Removed session 17. May 13 00:24:41.625959 sshd[4013]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:41.627538 sshd[4013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:41.631395 systemd-logind[1422]: New session 18 of user core. May 13 00:24:41.644586 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:24:41.755646 sshd[4013]: pam_unix(sshd:session): session closed for user core May 13 00:24:41.758763 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. May 13 00:24:41.759036 systemd[1]: sshd@17-10.0.0.75:22-10.0.0.1:46598.service: Deactivated successfully. May 13 00:24:41.760916 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:24:41.761700 systemd-logind[1422]: Removed session 18. May 13 00:24:46.770096 systemd[1]: Started sshd@18-10.0.0.75:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272). May 13 00:24:46.805514 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:46.806388 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:46.813854 systemd-logind[1422]: New session 19 of user core. May 13 00:24:46.827653 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:24:46.940841 sshd[4029]: pam_unix(sshd:session): session closed for user core May 13 00:24:46.944266 systemd[1]: sshd@18-10.0.0.75:22-10.0.0.1:37272.service: Deactivated successfully. May 13 00:24:46.946878 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:24:46.948112 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. May 13 00:24:46.949480 systemd-logind[1422]: Removed session 19. May 13 00:24:51.952143 systemd[1]: Started sshd@19-10.0.0.75:22-10.0.0.1:37276.service - OpenSSH per-connection server daemon (10.0.0.1:37276). May 13 00:24:51.985342 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 37276 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:51.986669 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:51.990644 systemd-logind[1422]: New session 20 of user core. May 13 00:24:52.002631 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:24:52.115601 sshd[4046]: pam_unix(sshd:session): session closed for user core May 13 00:24:52.119067 systemd[1]: sshd@19-10.0.0.75:22-10.0.0.1:37276.service: Deactivated successfully. May 13 00:24:52.120729 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:24:52.121329 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. May 13 00:24:52.122993 systemd-logind[1422]: Removed session 20. May 13 00:24:57.126187 systemd[1]: Started sshd@20-10.0.0.75:22-10.0.0.1:45900.service - OpenSSH per-connection server daemon (10.0.0.1:45900). May 13 00:24:57.159237 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 45900 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:57.160543 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:57.164021 systemd-logind[1422]: New session 21 of user core. May 13 00:24:57.173598 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 13 00:24:57.283067 sshd[4063]: pam_unix(sshd:session): session closed for user core May 13 00:24:57.291024 systemd[1]: sshd@20-10.0.0.75:22-10.0.0.1:45900.service: Deactivated successfully. May 13 00:24:57.293903 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:24:57.295242 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit. May 13 00:24:57.299755 systemd[1]: Started sshd@21-10.0.0.75:22-10.0.0.1:45910.service - OpenSSH per-connection server daemon (10.0.0.1:45910). May 13 00:24:57.300607 systemd-logind[1422]: Removed session 21. May 13 00:24:57.328886 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 45910 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:24:57.330149 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:24:57.335179 systemd-logind[1422]: New session 22 of user core. May 13 00:24:57.347615 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:24:59.157920 containerd[1434]: time="2025-05-13T00:24:59.157381117Z" level=info msg="StopContainer for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" with timeout 30 (s)" May 13 00:24:59.158714 containerd[1434]: time="2025-05-13T00:24:59.158688283Z" level=info msg="Stop container \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" with signal terminated" May 13 00:24:59.169177 systemd[1]: cri-containerd-5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737.scope: Deactivated successfully. May 13 00:24:59.195302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737-rootfs.mount: Deactivated successfully. May 13 00:24:59.197127 containerd[1434]: time="2025-05-13T00:24:59.196868209Z" level=info msg="shim disconnected" id=5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737 namespace=k8s.io May 13 00:24:59.197127 containerd[1434]: time="2025-05-13T00:24:59.196985803Z" level=warning msg="cleaning up after shim disconnected" id=5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737 namespace=k8s.io May 13 00:24:59.197127 containerd[1434]: time="2025-05-13T00:24:59.196995922Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:59.201671 containerd[1434]: time="2025-05-13T00:24:59.201614062Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:24:59.220226 containerd[1434]: time="2025-05-13T00:24:59.220196373Z" level=info msg="StopContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" with timeout 2 (s)" May 13 00:24:59.220473 containerd[1434]: time="2025-05-13T00:24:59.220448559Z" level=info msg="Stop container \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" with signal terminated" May 13 00:24:59.227110 systemd-networkd[1377]: lxc_health: Link DOWN May 13 00:24:59.227121 systemd-networkd[1377]: lxc_health: Lost carrier May 13 00:24:59.247999 containerd[1434]: time="2025-05-13T00:24:59.247943968Z" level=info msg="StopContainer for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" returns successfully" May 13 00:24:59.252114 systemd[1]: cri-containerd-4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26.scope: Deactivated successfully. 
May 13 00:24:59.252663 systemd[1]: cri-containerd-4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26.scope: Consumed 6.800s CPU time. May 13 00:24:59.257989 containerd[1434]: time="2025-05-13T00:24:59.257952803Z" level=info msg="StopPodSandbox for \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\"" May 13 00:24:59.258076 containerd[1434]: time="2025-05-13T00:24:59.258004960Z" level=info msg="Container to stop \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.260163 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c-shm.mount: Deactivated successfully. May 13 00:24:59.266809 systemd[1]: cri-containerd-1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c.scope: Deactivated successfully. May 13 00:24:59.270609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26-rootfs.mount: Deactivated successfully. May 13 00:24:59.276156 containerd[1434]: time="2025-05-13T00:24:59.276106419Z" level=info msg="shim disconnected" id=4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26 namespace=k8s.io May 13 00:24:59.276156 containerd[1434]: time="2025-05-13T00:24:59.276153777Z" level=warning msg="cleaning up after shim disconnected" id=4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26 namespace=k8s.io May 13 00:24:59.276156 containerd[1434]: time="2025-05-13T00:24:59.276162056Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:59.290955 containerd[1434]: time="2025-05-13T00:24:59.290911984Z" level=info msg="StopContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" returns successfully" May 13 00:24:59.291561 containerd[1434]: time="2025-05-13T00:24:59.291534749Z" level=info msg="StopPodSandbox for \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\"" May 13 00:24:59.291754 containerd[1434]: time="2025-05-13T00:24:59.291734658Z" level=info msg="Container to stop \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.291795 containerd[1434]: time="2025-05-13T00:24:59.291755216Z" level=info msg="Container to stop \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.291795 containerd[1434]: time="2025-05-13T00:24:59.291767096Z" level=info msg="Container to stop \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.291795 containerd[1434]: time="2025-05-13T00:24:59.291776535Z" level=info msg="Container to stop \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.291795 containerd[1434]: time="2025-05-13T00:24:59.291785815Z" level=info msg="Container to stop \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:24:59.293414 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31-shm.mount: Deactivated successfully. 
May 13 00:24:59.298173 systemd[1]: cri-containerd-4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31.scope: Deactivated successfully. May 13 00:24:59.316610 containerd[1434]: time="2025-05-13T00:24:59.316273033Z" level=info msg="shim disconnected" id=1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c namespace=k8s.io May 13 00:24:59.316610 containerd[1434]: time="2025-05-13T00:24:59.316609254Z" level=warning msg="cleaning up after shim disconnected" id=1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c namespace=k8s.io May 13 00:24:59.316610 containerd[1434]: time="2025-05-13T00:24:59.316620374Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:59.331748 containerd[1434]: time="2025-05-13T00:24:59.331547692Z" level=info msg="shim disconnected" id=4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31 namespace=k8s.io May 13 00:24:59.331748 containerd[1434]: time="2025-05-13T00:24:59.331602929Z" level=warning msg="cleaning up after shim disconnected" id=4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31 namespace=k8s.io May 13 00:24:59.331748 containerd[1434]: time="2025-05-13T00:24:59.331611288Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:24:59.336517 containerd[1434]: time="2025-05-13T00:24:59.336485653Z" level=info msg="TearDown network for sandbox \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\" successfully" May 13 00:24:59.336517 containerd[1434]: time="2025-05-13T00:24:59.336513132Z" level=info msg="StopPodSandbox for \"1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c\" returns successfully" May 13 00:24:59.347028 containerd[1434]: time="2025-05-13T00:24:59.346789672Z" level=info msg="TearDown network for sandbox \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" successfully" May 13 00:24:59.347028 containerd[1434]: time="2025-05-13T00:24:59.347026778Z" level=info msg="StopPodSandbox for \"4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31\" returns successfully" May 13 00:24:59.407724 kubelet[2457]: I0513 00:24:59.407657 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-xtables-lock\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.407724 kubelet[2457]: I0513 00:24:59.407704 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-bpf-maps\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.407724 kubelet[2457]: I0513 00:24:59.407726 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnrrv\" (UniqueName: \"kubernetes.io/projected/85303fae-407f-44da-b06e-ee1188ca1697-kube-api-access-fnrrv\") pod \"85303fae-407f-44da-b06e-ee1188ca1697\" (UID: \"85303fae-407f-44da-b06e-ee1188ca1697\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407748 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-cgroup\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407765 2457 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85303fae-407f-44da-b06e-ee1188ca1697-cilium-config-path\") pod \"85303fae-407f-44da-b06e-ee1188ca1697\" (UID: \"85303fae-407f-44da-b06e-ee1188ca1697\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407780 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cni-path\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407797 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c87dj\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-kube-api-access-c87dj\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407814 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-config-path\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408329 kubelet[2457]: I0513 00:24:59.407829 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-run\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.407914 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-net\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.407943 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edf65f11-7682-4b4e-aa0f-28ac07d5b993-clustermesh-secrets\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.407964 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-lib-modules\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.407978 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hostproc\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.407992 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-etc-cni-netd\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408513 kubelet[2457]: I0513 00:24:59.408309 2457 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-kernel\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.408820 kubelet[2457]: I0513 00:24:59.408359 2457 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls\") pod \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\" (UID: \"edf65f11-7682-4b4e-aa0f-28ac07d5b993\") " May 13 00:24:59.413792 kubelet[2457]: I0513 00:24:59.413426 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.413792 kubelet[2457]: I0513 00:24:59.413501 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.413792 kubelet[2457]: I0513 00:24:59.413519 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.413792 kubelet[2457]: I0513 00:24:59.413535 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.413792 kubelet[2457]: I0513 00:24:59.413550 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cni-path" (OuterVolumeSpecName: "cni-path") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.414312 kubelet[2457]: I0513 00:24:59.414278 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85303fae-407f-44da-b06e-ee1188ca1697-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "85303fae-407f-44da-b06e-ee1188ca1697" (UID: "85303fae-407f-44da-b06e-ee1188ca1697"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:24:59.414362 kubelet[2457]: I0513 00:24:59.414333 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hostproc" (OuterVolumeSpecName: "hostproc") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.414786 kubelet[2457]: I0513 00:24:59.414751 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85303fae-407f-44da-b06e-ee1188ca1697-kube-api-access-fnrrv" (OuterVolumeSpecName: "kube-api-access-fnrrv") pod "85303fae-407f-44da-b06e-ee1188ca1697" (UID: "85303fae-407f-44da-b06e-ee1188ca1697"). InnerVolumeSpecName "kube-api-access-fnrrv". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:59.414855 kubelet[2457]: I0513 00:24:59.414781 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:59.414855 kubelet[2457]: I0513 00:24:59.414818 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.414855 kubelet[2457]: I0513 00:24:59.414835 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.414855 kubelet[2457]: I0513 00:24:59.414851 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.414948 kubelet[2457]: I0513 00:24:59.414866 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 13 00:24:59.416165 kubelet[2457]: I0513 00:24:59.416136 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-kube-api-access-c87dj" (OuterVolumeSpecName: "kube-api-access-c87dj") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "kube-api-access-c87dj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 13 00:24:59.416550 kubelet[2457]: I0513 00:24:59.416523 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 13 00:24:59.418091 kubelet[2457]: I0513 00:24:59.418059 2457 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf65f11-7682-4b4e-aa0f-28ac07d5b993-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edf65f11-7682-4b4e-aa0f-28ac07d5b993" (UID: "edf65f11-7682-4b4e-aa0f-28ac07d5b993"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510542 2457 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510576 2457 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edf65f11-7682-4b4e-aa0f-28ac07d5b993-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510586 2457 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510595 2457 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510603 2457 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510611 2457 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510599 kubelet[2457]: I0513 00:24:59.510626 2457 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510634 2457 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510641 2457 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510649 2457 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fnrrv\" (UniqueName: 
\"kubernetes.io/projected/85303fae-407f-44da-b06e-ee1188ca1697-kube-api-access-fnrrv\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510656 2457 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510664 2457 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/85303fae-407f-44da-b06e-ee1188ca1697-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510671 2457 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510679 2457 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.510848 kubelet[2457]: I0513 00:24:59.510687 2457 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c87dj\" (UniqueName: \"kubernetes.io/projected/edf65f11-7682-4b4e-aa0f-28ac07d5b993-kube-api-access-c87dj\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.511012 kubelet[2457]: I0513 00:24:59.510694 2457 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edf65f11-7682-4b4e-aa0f-28ac07d5b993-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:24:59.600839 systemd[1]: Removed slice kubepods-besteffort-pod85303fae_407f_44da_b06e_ee1188ca1697.slice - libcontainer container kubepods-besteffort-pod85303fae_407f_44da_b06e_ee1188ca1697.slice. May 13 00:24:59.602313 systemd[1]: Removed slice kubepods-burstable-podedf65f11_7682_4b4e_aa0f_28ac07d5b993.slice - libcontainer container kubepods-burstable-podedf65f11_7682_4b4e_aa0f_28ac07d5b993.slice. May 13 00:24:59.602392 systemd[1]: kubepods-burstable-podedf65f11_7682_4b4e_aa0f_28ac07d5b993.slice: Consumed 6.921s CPU time. 
May 13 00:24:59.646837 kubelet[2457]: E0513 00:24:59.646794 2457 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:24:59.789893 kubelet[2457]: I0513 00:24:59.789865 2457 scope.go:117] "RemoveContainer" containerID="4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26" May 13 00:24:59.794498 containerd[1434]: time="2025-05-13T00:24:59.793601626Z" level=info msg="RemoveContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\"" May 13 00:24:59.835573 containerd[1434]: time="2025-05-13T00:24:59.835528741Z" level=info msg="RemoveContainer for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" returns successfully" May 13 00:24:59.836124 kubelet[2457]: I0513 00:24:59.836013 2457 scope.go:117] "RemoveContainer" containerID="9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464" May 13 00:24:59.837596 containerd[1434]: time="2025-05-13T00:24:59.837426714Z" level=info msg="RemoveContainer for \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\"" May 13 00:24:59.840501 containerd[1434]: time="2025-05-13T00:24:59.840422385Z" level=info msg="RemoveContainer for \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\" returns successfully" May 13 00:24:59.843406 kubelet[2457]: I0513 00:24:59.841535 2457 scope.go:117] "RemoveContainer" containerID="cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f" May 13 00:24:59.844537 containerd[1434]: time="2025-05-13T00:24:59.844508194Z" level=info msg="RemoveContainer for \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\"" May 13 00:24:59.846610 containerd[1434]: time="2025-05-13T00:24:59.846571518Z" level=info msg="RemoveContainer for \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\" returns successfully" May 13 00:24:59.846791 kubelet[2457]: I0513 00:24:59.846759 2457 scope.go:117] "RemoveContainer" containerID="f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd" May 13 00:24:59.847650 containerd[1434]: time="2025-05-13T00:24:59.847625258Z" level=info msg="RemoveContainer for \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\"" May 13 00:24:59.849875 containerd[1434]: time="2025-05-13T00:24:59.849794016Z" level=info msg="RemoveContainer for \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\" returns successfully" May 13 00:24:59.849977 kubelet[2457]: I0513 00:24:59.849951 2457 scope.go:117] "RemoveContainer" containerID="ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9" May 13 00:24:59.850815 containerd[1434]: time="2025-05-13T00:24:59.850790120Z" level=info msg="RemoveContainer for \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\"" May 13 00:24:59.852959 containerd[1434]: time="2025-05-13T00:24:59.852926199Z" level=info msg="RemoveContainer for \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\" returns successfully" May 13 00:24:59.853180 kubelet[2457]: I0513 00:24:59.853093 2457 scope.go:117] "RemoveContainer" containerID="4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26" May 13 00:24:59.853343 containerd[1434]: time="2025-05-13T00:24:59.853301418Z" level=error msg="ContainerStatus for \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\": not found" May 13 00:24:59.853457 kubelet[2457]: E0513 00:24:59.853423 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\": not found" containerID="4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26" May 13 00:24:59.853544 kubelet[2457]: I0513 00:24:59.853466 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26"} err="failed to get container status \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c7cda2b3d2383d0db543ff0a64c644a35ed790c447a9edc8c9ab4c4e0423d26\": not found" May 13 00:24:59.853637 kubelet[2457]: I0513 00:24:59.853545 2457 scope.go:117] "RemoveContainer" containerID="9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464" May 13 00:24:59.853844 containerd[1434]: time="2025-05-13T00:24:59.853775352Z" level=error msg="ContainerStatus for \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\": not found" May 13 00:24:59.853913 kubelet[2457]: E0513 00:24:59.853889 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\": not found" containerID="9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464" May 13 00:24:59.853944 kubelet[2457]: I0513 00:24:59.853916 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464"} err="failed to get container status \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\": rpc error: code = NotFound desc = an error occurred when try to find container \"9669c5b5a44a2db0e50a8b1542739e14f9278e0697ed7ed62971ab04cd708464\": not found" May 13 00:24:59.853944 kubelet[2457]: I0513 00:24:59.853932 2457 scope.go:117] "RemoveContainer" containerID="cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f" May 13 00:24:59.854079 containerd[1434]: time="2025-05-13T00:24:59.854052616Z" level=error msg="ContainerStatus for \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\": not found" May 13 00:24:59.854179 kubelet[2457]: E0513 00:24:59.854161 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\": not found" containerID="cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f" May 13 00:24:59.854205 kubelet[2457]: I0513 00:24:59.854183 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f"} err="failed to get container status 
\"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf42ac5d16df7a7bcb6e7b5b2e3819d3e261a75ec50f4b67f301c814c8e56b2f\": not found" May 13 00:24:59.854205 kubelet[2457]: I0513 00:24:59.854195 2457 scope.go:117] "RemoveContainer" containerID="f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd" May 13 00:24:59.854388 containerd[1434]: time="2025-05-13T00:24:59.854358039Z" level=error msg="ContainerStatus for \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\": not found" May 13 00:24:59.854549 kubelet[2457]: E0513 00:24:59.854527 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\": not found" containerID="f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd" May 13 00:24:59.854589 kubelet[2457]: I0513 00:24:59.854559 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd"} err="failed to get container status \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"f67256480939ac970ff682de3e605f808673f9809393a24dbfe943b5a69b21dd\": not found" May 13 00:24:59.854589 kubelet[2457]: I0513 00:24:59.854580 2457 scope.go:117] "RemoveContainer" containerID="ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9" May 13 00:24:59.854751 containerd[1434]: time="2025-05-13T00:24:59.854723058Z" level=error msg="ContainerStatus for \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\": not found" May 13 00:24:59.854835 kubelet[2457]: E0513 00:24:59.854817 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\": not found" containerID="ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9" May 13 00:24:59.854865 kubelet[2457]: I0513 00:24:59.854840 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9"} err="failed to get container status \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec176bcc37244d044b9336188f3d468764793f67698b476063db6bb14fc62ac9\": not found" May 13 00:24:59.854865 kubelet[2457]: I0513 00:24:59.854858 2457 scope.go:117] "RemoveContainer" containerID="5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737" May 13 00:24:59.855595 containerd[1434]: time="2025-05-13T00:24:59.855570570Z" level=info msg="RemoveContainer for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\"" May 13 00:24:59.857797 containerd[1434]: time="2025-05-13T00:24:59.857755807Z" level=info msg="RemoveContainer for 
\"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" returns successfully" May 13 00:24:59.858032 kubelet[2457]: I0513 00:24:59.857943 2457 scope.go:117] "RemoveContainer" containerID="5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737" May 13 00:24:59.858328 containerd[1434]: time="2025-05-13T00:24:59.858289177Z" level=error msg="ContainerStatus for \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\": not found" May 13 00:24:59.858421 kubelet[2457]: E0513 00:24:59.858402 2457 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\": not found" containerID="5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737" May 13 00:24:59.858506 kubelet[2457]: I0513 00:24:59.858426 2457 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737"} err="failed to get container status \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\": rpc error: code = NotFound desc = an error occurred when try to find container \"5766079a29e85bd53812c5ccbdbeeab69158c4bf397164a37dac4ccd10170737\": not found" May 13 00:25:00.172146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4897990a2c02bc9821dfde5166524138856b33561b3a5372fa06eeed38f8ce31-rootfs.mount: Deactivated successfully. May 13 00:25:00.172235 systemd[1]: var-lib-kubelet-pods-edf65f11\x2d7682\x2d4b4e\x2daa0f\x2d28ac07d5b993-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:25:00.172285 systemd[1]: var-lib-kubelet-pods-edf65f11\x2d7682\x2d4b4e\x2daa0f\x2d28ac07d5b993-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:25:00.172339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ae8e2ebd370ce5a10ee0f1142ad69572ac9ad93c4caadb8b2501dc530bedb6c-rootfs.mount: Deactivated successfully. May 13 00:25:00.172391 systemd[1]: var-lib-kubelet-pods-85303fae\x2d407f\x2d44da\x2db06e\x2dee1188ca1697-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfnrrv.mount: Deactivated successfully. May 13 00:25:00.172454 systemd[1]: var-lib-kubelet-pods-edf65f11\x2d7682\x2d4b4e\x2daa0f\x2d28ac07d5b993-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc87dj.mount: Deactivated successfully. May 13 00:25:01.107319 sshd[4078]: pam_unix(sshd:session): session closed for user core May 13 00:25:01.116961 systemd[1]: sshd@21-10.0.0.75:22-10.0.0.1:45910.service: Deactivated successfully. May 13 00:25:01.118587 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:25:01.118875 systemd[1]: session-22.scope: Consumed 1.130s CPU time. May 13 00:25:01.120104 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. May 13 00:25:01.124831 systemd[1]: Started sshd@22-10.0.0.75:22-10.0.0.1:45924.service - OpenSSH per-connection server daemon (10.0.0.1:45924). May 13 00:25:01.125856 systemd-logind[1422]: Removed session 22. 
May 13 00:25:01.153873 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 45924 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:25:01.155070 sshd[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:25:01.158915 systemd-logind[1422]: New session 23 of user core. May 13 00:25:01.168660 systemd[1]: Started session-23.scope - Session 23 of User core. May 13 00:25:01.259912 kubelet[2457]: I0513 00:25:01.259399 2457 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:25:01Z","lastTransitionTime":"2025-05-13T00:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 00:25:01.594155 kubelet[2457]: I0513 00:25:01.594111 2457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85303fae-407f-44da-b06e-ee1188ca1697" path="/var/lib/kubelet/pods/85303fae-407f-44da-b06e-ee1188ca1697/volumes" May 13 00:25:01.594551 kubelet[2457]: I0513 00:25:01.594529 2457 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf65f11-7682-4b4e-aa0f-28ac07d5b993" path="/var/lib/kubelet/pods/edf65f11-7682-4b4e-aa0f-28ac07d5b993/volumes" May 13 00:25:01.845747 sshd[4242]: pam_unix(sshd:session): session closed for user core May 13 00:25:01.853223 systemd[1]: sshd@22-10.0.0.75:22-10.0.0.1:45924.service: Deactivated successfully. May 13 00:25:01.856255 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:25:01.860963 systemd-logind[1422]: Session 23 logged out. Waiting for processes to exit. May 13 00:25:01.879431 kubelet[2457]: I0513 00:25:01.878033 2457 memory_manager.go:355] "RemoveStaleState removing state" podUID="85303fae-407f-44da-b06e-ee1188ca1697" containerName="cilium-operator" May 13 00:25:01.879431 kubelet[2457]: I0513 00:25:01.878066 2457 memory_manager.go:355] "RemoveStaleState removing state" podUID="edf65f11-7682-4b4e-aa0f-28ac07d5b993" containerName="cilium-agent" May 13 00:25:01.879780 systemd[1]: Started sshd@23-10.0.0.75:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928). May 13 00:25:01.880554 systemd-logind[1422]: Removed session 23. May 13 00:25:01.890318 systemd[1]: Created slice kubepods-burstable-podd4d27d32_b5a8_408d_adb2_173471b84012.slice - libcontainer container kubepods-burstable-podd4d27d32_b5a8_408d_adb2_173471b84012.slice. May 13 00:25:01.917630 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:25:01.918935 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:25:01.922221 systemd-logind[1422]: New session 24 of user core. 
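[Editor's note — illustrative sketch, not kubelet source] The "Cleaned up orphaned pod volumes dir" entries above fire only after every volume under /var/lib/kubelet/pods/<uid>/volumes has been unmounted and removed; then the per-pod directory itself can be deleted. A toy Go check along those lines, with the path layout taken from the log and all names otherwise hypothetical:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // podVolumesDirEmpty reports whether the pod's volumes directory holds
    // no remaining volume subdirectories, i.e. whether it is safe to remove
    // the per-pod directory under /var/lib/kubelet/pods.
    func podVolumesDirEmpty(podUID string) (bool, error) {
        dir := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes")
        entries, err := os.ReadDir(dir)
        if os.IsNotExist(err) {
            return true, nil // already gone
        }
        if err != nil {
            return false, err
        }
        // Each entry is a plugin dir such as "kubernetes.io~projected";
        // any remaining subdirectory means a volume is still mounted.
        for _, e := range entries {
            sub, err := os.ReadDir(filepath.Join(dir, e.Name()))
            if err != nil {
                return false, err
            }
            if len(sub) > 0 {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := podVolumesDirEmpty("85303fae-407f-44da-b06e-ee1188ca1697")
        fmt.Println(ok, err)
    }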
May 13 00:25:01.927547 kubelet[2457]: I0513 00:25:01.927497 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-bpf-maps\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927547 kubelet[2457]: I0513 00:25:01.927532 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-etc-cni-netd\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927687 kubelet[2457]: I0513 00:25:01.927554 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d4d27d32-b5a8-408d-adb2-173471b84012-cilium-ipsec-secrets\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927687 kubelet[2457]: I0513 00:25:01.927570 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-cilium-cgroup\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927687 kubelet[2457]: I0513 00:25:01.927586 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5b29\" (UniqueName: \"kubernetes.io/projected/d4d27d32-b5a8-408d-adb2-173471b84012-kube-api-access-s5b29\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927687 kubelet[2457]: I0513 00:25:01.927602 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-cni-path\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927687 kubelet[2457]: I0513 00:25:01.927617 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d4d27d32-b5a8-408d-adb2-173471b84012-cilium-config-path\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927632 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-host-proc-sys-kernel\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927647 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-hostproc\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927664 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d4d27d32-b5a8-408d-adb2-173471b84012-clustermesh-secrets\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927679 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d4d27d32-b5a8-408d-adb2-173471b84012-hubble-tls\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927694 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-lib-modules\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927861 kubelet[2457]: I0513 00:25:01.927711 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-xtables-lock\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927988 kubelet[2457]: I0513 00:25:01.927727 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-host-proc-sys-net\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.927988 kubelet[2457]: I0513 00:25:01.927743 2457 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d4d27d32-b5a8-408d-adb2-173471b84012-cilium-run\") pod \"cilium-gm5hc\" (UID: \"d4d27d32-b5a8-408d-adb2-173471b84012\") " pod="kube-system/cilium-gm5hc" May 13 00:25:01.934583 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 00:25:01.984083 sshd[4256]: pam_unix(sshd:session): session closed for user core May 13 00:25:01.998573 systemd[1]: sshd@23-10.0.0.75:22-10.0.0.1:45928.service: Deactivated successfully. May 13 00:25:02.000142 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:25:02.003390 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit. May 13 00:25:02.016739 systemd[1]: Started sshd@24-10.0.0.75:22-10.0.0.1:45936.service - OpenSSH per-connection server daemon (10.0.0.1:45936). May 13 00:25:02.017708 systemd-logind[1422]: Removed session 24. May 13 00:25:02.053552 sshd[4264]: Accepted publickey for core from 10.0.0.1 port 45936 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:25:02.054771 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:25:02.058224 systemd-logind[1422]: New session 25 of user core. May 13 00:25:02.067674 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 13 00:25:02.194412 kubelet[2457]: E0513 00:25:02.194271 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:02.195683 containerd[1434]: time="2025-05-13T00:25:02.195092767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5hc,Uid:d4d27d32-b5a8-408d-adb2-173471b84012,Namespace:kube-system,Attempt:0,}" May 13 00:25:02.225636 containerd[1434]: time="2025-05-13T00:25:02.225543559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:25:02.225636 containerd[1434]: time="2025-05-13T00:25:02.225606036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:25:02.225636 containerd[1434]: time="2025-05-13T00:25:02.225617435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:25:02.225636 containerd[1434]: time="2025-05-13T00:25:02.225722710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:25:02.242619 systemd[1]: Started cri-containerd-c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48.scope - libcontainer container c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48. May 13 00:25:02.261656 containerd[1434]: time="2025-05-13T00:25:02.261599923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5hc,Uid:d4d27d32-b5a8-408d-adb2-173471b84012,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\"" May 13 00:25:02.262283 kubelet[2457]: E0513 00:25:02.262257 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:02.265071 containerd[1434]: time="2025-05-13T00:25:02.264976803Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:25:02.275140 containerd[1434]: time="2025-05-13T00:25:02.274995926Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b\"" May 13 00:25:02.275846 containerd[1434]: time="2025-05-13T00:25:02.275817807Z" level=info msg="StartContainer for \"6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b\"" May 13 00:25:02.304714 systemd[1]: Started cri-containerd-6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b.scope - libcontainer container 6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b. May 13 00:25:02.324717 containerd[1434]: time="2025-05-13T00:25:02.324677803Z" level=info msg="StartContainer for \"6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b\" returns successfully" May 13 00:25:02.335517 systemd[1]: cri-containerd-6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b.scope: Deactivated successfully. 
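[Editor's note — illustrative sketch, not part of the log] The "RunPodSandbox for &PodSandboxMetadata{Name:cilium-gm5hc,…}" entry above is the kubelet invoking the CRI gRPC API on containerd's unix socket; the returned sandbox ID is the long hex name visible in the cri-containerd-….scope units. A hedged sketch of the same call using the published CRI bindings (k8s.io/cri-api), with metadata values taken from the log and everything else trimmed:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                // Values copied from the RunPodSandbox log entry above.
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "cilium-gm5hc",
                    Uid:       "d4d27d32-b5a8-408d-adb2-173471b84012",
                    Namespace: "kube-system",
                    Attempt:   0,
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox id:", resp.PodSandboxId)
    }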
May 13 00:25:02.360472 containerd[1434]: time="2025-05-13T00:25:02.360386664Z" level=info msg="shim disconnected" id=6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b namespace=k8s.io May 13 00:25:02.360472 containerd[1434]: time="2025-05-13T00:25:02.360460981Z" level=warning msg="cleaning up after shim disconnected" id=6569d422039c86cde0e09c5c79f8f703c8a40b380ebebedeb682f3a145ae003b namespace=k8s.io May 13 00:25:02.360472 containerd[1434]: time="2025-05-13T00:25:02.360472820Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:02.801553 kubelet[2457]: E0513 00:25:02.801464 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:02.803768 containerd[1434]: time="2025-05-13T00:25:02.803537664Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:25:02.812899 containerd[1434]: time="2025-05-13T00:25:02.812834861Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182\"" May 13 00:25:02.813504 containerd[1434]: time="2025-05-13T00:25:02.813477191Z" level=info msg="StartContainer for \"19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182\"" May 13 00:25:02.845628 systemd[1]: Started cri-containerd-19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182.scope - libcontainer container 19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182. May 13 00:25:02.866694 containerd[1434]: time="2025-05-13T00:25:02.866653901Z" level=info msg="StartContainer for \"19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182\" returns successfully" May 13 00:25:02.874096 systemd[1]: cri-containerd-19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182.scope: Deactivated successfully. May 13 00:25:02.892033 containerd[1434]: time="2025-05-13T00:25:02.891976337Z" level=info msg="shim disconnected" id=19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182 namespace=k8s.io May 13 00:25:02.892033 containerd[1434]: time="2025-05-13T00:25:02.892031534Z" level=warning msg="cleaning up after shim disconnected" id=19259d7867da4484a7647139dd755f9edc8cbf2bbbe67fb9650620e7617e0182 namespace=k8s.io May 13 00:25:02.892033 containerd[1434]: time="2025-05-13T00:25:02.892040814Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:03.805170 kubelet[2457]: E0513 00:25:03.805129 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:03.807999 containerd[1434]: time="2025-05-13T00:25:03.807883261Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:25:03.822561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090382152.mount: Deactivated successfully. 
May 13 00:25:03.822952 containerd[1434]: time="2025-05-13T00:25:03.822916427Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7\"" May 13 00:25:03.824524 containerd[1434]: time="2025-05-13T00:25:03.824488317Z" level=info msg="StartContainer for \"67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7\"" May 13 00:25:03.850644 systemd[1]: Started cri-containerd-67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7.scope - libcontainer container 67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7. May 13 00:25:03.873907 containerd[1434]: time="2025-05-13T00:25:03.873847705Z" level=info msg="StartContainer for \"67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7\" returns successfully" May 13 00:25:03.874587 systemd[1]: cri-containerd-67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7.scope: Deactivated successfully. May 13 00:25:03.896502 containerd[1434]: time="2025-05-13T00:25:03.896430214Z" level=info msg="shim disconnected" id=67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7 namespace=k8s.io May 13 00:25:03.896502 containerd[1434]: time="2025-05-13T00:25:03.896497610Z" level=warning msg="cleaning up after shim disconnected" id=67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7 namespace=k8s.io May 13 00:25:03.896502 containerd[1434]: time="2025-05-13T00:25:03.896506050Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:04.032510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67e21d7850888852568d6f40591515999db356a8b827244c7af265b1ec70e4b7-rootfs.mount: Deactivated successfully. May 13 00:25:04.648259 kubelet[2457]: E0513 00:25:04.648205 2457 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:25:04.810250 kubelet[2457]: E0513 00:25:04.810007 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:04.815601 containerd[1434]: time="2025-05-13T00:25:04.815548451Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:25:04.829909 containerd[1434]: time="2025-05-13T00:25:04.829754213Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02\"" May 13 00:25:04.830229 containerd[1434]: time="2025-05-13T00:25:04.830202114Z" level=info msg="StartContainer for \"fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02\"" May 13 00:25:04.854617 systemd[1]: Started cri-containerd-fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02.scope - libcontainer container fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02. May 13 00:25:04.874296 systemd[1]: cri-containerd-fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02.scope: Deactivated successfully. 
May 13 00:25:04.877005 containerd[1434]: time="2025-05-13T00:25:04.875080264Z" level=info msg="StartContainer for \"fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02\" returns successfully" May 13 00:25:04.895341 containerd[1434]: time="2025-05-13T00:25:04.895263413Z" level=info msg="shim disconnected" id=fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02 namespace=k8s.io May 13 00:25:04.895341 containerd[1434]: time="2025-05-13T00:25:04.895329771Z" level=warning msg="cleaning up after shim disconnected" id=fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02 namespace=k8s.io May 13 00:25:04.895341 containerd[1434]: time="2025-05-13T00:25:04.895339170Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:25:05.032519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc01b89887ab1bbe18b83e2c24a36b7346e7455c4fb8b03d464021d2796fa02-rootfs.mount: Deactivated successfully. May 13 00:25:05.814702 kubelet[2457]: E0513 00:25:05.814645 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:05.819433 containerd[1434]: time="2025-05-13T00:25:05.818005655Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:25:05.832699 containerd[1434]: time="2025-05-13T00:25:05.832593878Z" level=info msg="CreateContainer within sandbox \"c3cbb3e9cf0b5ec3f49b1c266fddadcb310fdc1a0b63b0cd020f3c9e5a310a48\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b706083e358c5a55f87ed77e72f982402609f187f27297ef5dbf4b1127b98671\"" May 13 00:25:05.834151 containerd[1434]: time="2025-05-13T00:25:05.833676196Z" level=info msg="StartContainer for \"b706083e358c5a55f87ed77e72f982402609f187f27297ef5dbf4b1127b98671\"" May 13 00:25:05.861601 systemd[1]: Started cri-containerd-b706083e358c5a55f87ed77e72f982402609f187f27297ef5dbf4b1127b98671.scope - libcontainer container b706083e358c5a55f87ed77e72f982402609f187f27297ef5dbf4b1127b98671. 
May 13 00:25:05.885119 containerd[1434]: time="2025-05-13T00:25:05.885080244Z" level=info msg="StartContainer for \"b706083e358c5a55f87ed77e72f982402609f187f27297ef5dbf4b1127b98671\" returns successfully" May 13 00:25:06.148494 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 13 00:25:06.818250 kubelet[2457]: E0513 00:25:06.818147 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:08.195655 kubelet[2457]: E0513 00:25:08.195566 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:08.592262 kubelet[2457]: E0513 00:25:08.592132 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:08.998811 systemd-networkd[1377]: lxc_health: Link UP May 13 00:25:08.999267 systemd-networkd[1377]: lxc_health: Gained carrier May 13 00:25:10.194896 kubelet[2457]: E0513 00:25:10.194842 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:10.217757 kubelet[2457]: I0513 00:25:10.217692 2457 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gm5hc" podStartSLOduration=9.217676373 podStartE2EDuration="9.217676373s" podCreationTimestamp="2025-05-13 00:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:25:06.833234137 +0000 UTC m=+77.321873622" watchObservedRunningTime="2025-05-13 00:25:10.217676373 +0000 UTC m=+80.706315858" May 13 00:25:10.591591 systemd-networkd[1377]: lxc_health: Gained IPv6LL May 13 00:25:10.826628 kubelet[2457]: E0513 00:25:10.826052 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:11.827910 kubelet[2457]: E0513 00:25:11.827854 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:13.592431 kubelet[2457]: E0513 00:25:13.592348 2457 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:25:14.778635 sshd[4264]: pam_unix(sshd:session): session closed for user core May 13 00:25:14.782763 systemd[1]: sshd@24-10.0.0.75:22-10.0.0.1:45936.service: Deactivated successfully. May 13 00:25:14.784799 systemd[1]: session-25.scope: Deactivated successfully. May 13 00:25:14.786237 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit. May 13 00:25:14.787612 systemd-logind[1422]: Removed session 25.