May 13 00:28:12.910867 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:28:12.910891 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:28:12.910901 kernel: KASLR enabled
May 13 00:28:12.910907 kernel: efi: EFI v2.7 by EDK II
May 13 00:28:12.910913 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:28:12.910919 kernel: random: crng init done
May 13 00:28:12.910926 kernel: ACPI: Early table checksum verification disabled
May 13 00:28:12.910932 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:28:12.910938 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:28:12.910946 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910952 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910958 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910964 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910970 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910977 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910986 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910992 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.910999 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:28:12.911005 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:28:12.911011 kernel: NUMA: Failed to initialise from firmware
May 13 00:28:12.911018 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:28:12.911024 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 00:28:12.911030 kernel: Zone ranges:
May 13 00:28:12.911036 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:28:12.911042 kernel: DMA32 empty
May 13 00:28:12.911050 kernel: Normal empty
May 13 00:28:12.911056 kernel: Movable zone start for each node
May 13 00:28:12.911062 kernel: Early memory node ranges
May 13 00:28:12.911068 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:28:12.911074 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:28:12.911081 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:28:12.911087 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:28:12.911093 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:28:12.911105 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:28:12.911112 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:28:12.911118 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:28:12.911125 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:28:12.911133 kernel: psci: probing for conduit method from ACPI.
May 13 00:28:12.911139 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:28:12.911145 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:28:12.911155 kernel: psci: Trusted OS migration not required
May 13 00:28:12.911162 kernel: psci: SMC Calling Convention v1.1
May 13 00:28:12.911169 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:28:12.911177 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:28:12.911183 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:28:12.911190 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:28:12.911197 kernel: Detected PIPT I-cache on CPU0
May 13 00:28:12.911203 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:28:12.911210 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:28:12.911217 kernel: CPU features: detected: Spectre-v4
May 13 00:28:12.911223 kernel: CPU features: detected: Spectre-BHB
May 13 00:28:12.911230 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:28:12.911239 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:28:12.911247 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:28:12.911254 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:28:12.911260 kernel: alternatives: applying boot alternatives
May 13 00:28:12.911268 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:28:12.911275 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:28:12.911282 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:28:12.911289 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:28:12.911295 kernel: Fallback order for Node 0: 0
May 13 00:28:12.911323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:28:12.911330 kernel: Policy zone: DMA
May 13 00:28:12.911337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:28:12.911345 kernel: software IO TLB: area num 4.
May 13 00:28:12.911352 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:28:12.911359 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 13 00:28:12.911366 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:28:12.911373 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:28:12.911380 kernel: rcu: RCU event tracing is enabled.
May 13 00:28:12.911387 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:28:12.911394 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:28:12.911401 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:28:12.911410 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:28:12.911417 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:28:12.911424 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:28:12.911432 kernel: GICv3: 256 SPIs implemented
May 13 00:28:12.911438 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:28:12.911445 kernel: Root IRQ handler: gic_handle_irq
May 13 00:28:12.911452 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:28:12.911458 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:28:12.911465 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:28:12.911472 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:28:12.911479 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:28:12.911486 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:28:12.911493 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:28:12.911499 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:28:12.911508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:28:12.911515 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:28:12.911522 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:28:12.911529 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:28:12.911535 kernel: arm-pv: using stolen time PV
May 13 00:28:12.911542 kernel: Console: colour dummy device 80x25
May 13 00:28:12.911549 kernel: ACPI: Core revision 20230628
May 13 00:28:12.911556 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:28:12.911563 kernel: pid_max: default: 32768 minimum: 301
May 13 00:28:12.911570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:28:12.911578 kernel: landlock: Up and running.
May 13 00:28:12.911585 kernel: SELinux: Initializing.
May 13 00:28:12.911592 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:28:12.911600 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:28:12.911606 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:28:12.911614 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:28:12.911621 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:28:12.911628 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:28:12.911635 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:28:12.911643 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:28:12.911650 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:28:12.911657 kernel: Remapping and enabling EFI services.
May 13 00:28:12.911663 kernel: smp: Bringing up secondary CPUs ...
May 13 00:28:12.911670 kernel: Detected PIPT I-cache on CPU1
May 13 00:28:12.911677 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:28:12.911684 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:28:12.911691 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:28:12.911698 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:28:12.911706 kernel: Detected PIPT I-cache on CPU2
May 13 00:28:12.911713 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:28:12.911720 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:28:12.911732 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:28:12.911740 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:28:12.911748 kernel: Detected PIPT I-cache on CPU3
May 13 00:28:12.911755 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:28:12.911762 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:28:12.911769 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:28:12.911776 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:28:12.911783 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:28:12.911792 kernel: SMP: Total of 4 processors activated.
May 13 00:28:12.911799 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:28:12.911811 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:28:12.911819 kernel: CPU features: detected: Common not Private translations
May 13 00:28:12.911826 kernel: CPU features: detected: CRC32 instructions
May 13 00:28:12.911834 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:28:12.911843 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:28:12.911850 kernel: CPU features: detected: LSE atomic instructions
May 13 00:28:12.911857 kernel: CPU features: detected: Privileged Access Never
May 13 00:28:12.911864 kernel: CPU features: detected: RAS Extension Support
May 13 00:28:12.911871 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:28:12.911879 kernel: CPU: All CPU(s) started at EL1
May 13 00:28:12.911886 kernel: alternatives: applying system-wide alternatives
May 13 00:28:12.911893 kernel: devtmpfs: initialized
May 13 00:28:12.911901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:28:12.911909 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:28:12.911917 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:28:12.911924 kernel: SMBIOS 3.0.0 present.
May 13 00:28:12.911931 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:28:12.911939 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:28:12.911946 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:28:12.911953 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:28:12.911961 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:28:12.911969 kernel: audit: initializing netlink subsys (disabled)
May 13 00:28:12.911976 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 13 00:28:12.911985 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:28:12.911992 kernel: cpuidle: using governor menu
May 13 00:28:12.911999 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:28:12.912007 kernel: ASID allocator initialised with 32768 entries
May 13 00:28:12.912014 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:28:12.912022 kernel: Serial: AMBA PL011 UART driver
May 13 00:28:12.912029 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:28:12.912037 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:28:12.912044 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:28:12.912053 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:28:12.912060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:28:12.912067 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:28:12.912075 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:28:12.912082 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:28:12.912090 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:28:12.912097 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:28:12.912104 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:28:12.912111 kernel: ACPI: Added _OSI(Module Device)
May 13 00:28:12.912120 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:28:12.912127 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:28:12.912134 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:28:12.912141 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:28:12.912149 kernel: ACPI: Interpreter enabled
May 13 00:28:12.912156 kernel: ACPI: Using GIC for interrupt routing
May 13 00:28:12.912163 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:28:12.912170 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:28:12.912177 kernel: printk: console [ttyAMA0] enabled
May 13 00:28:12.912186 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:28:12.912354 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:28:12.912434 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:28:12.912500 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:28:12.912565 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:28:12.912629 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:28:12.912639 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:28:12.912650 kernel: PCI host bridge to bus 0000:00
May 13 00:28:12.912723 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:28:12.912783 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:28:12.912852 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:28:12.912912 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:28:12.912993 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:28:12.913078 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:28:12.913150 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:28:12.913231 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:28:12.913310 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:28:12.913402 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:28:12.913472 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:28:12.913539 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:28:12.913603 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:28:12.913660 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:28:12.913718 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:28:12.913728 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:28:12.913735 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:28:12.913743 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:28:12.913751 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:28:12.913758 kernel: iommu: Default domain type: Translated
May 13 00:28:12.913767 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:28:12.913775 kernel: efivars: Registered efivars operations
May 13 00:28:12.913782 kernel: vgaarb: loaded
May 13 00:28:12.913790 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:28:12.913797 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:28:12.913810 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:28:12.913818 kernel: pnp: PnP ACPI init
May 13 00:28:12.913892 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:28:12.913903 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:28:12.913913 kernel: NET: Registered PF_INET protocol family
May 13 00:28:12.913920 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:28:12.913927 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:28:12.913935 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:28:12.913942 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:28:12.913950 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:28:12.913957 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:28:12.913964 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:28:12.913973 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:28:12.913980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:28:12.913988 kernel: PCI: CLS 0 bytes, default 64
May 13 00:28:12.913995 kernel: kvm [1]: HYP mode not available
May 13 00:28:12.914002 kernel: Initialise system trusted keyrings
May 13 00:28:12.914009 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:28:12.914017 kernel: Key type asymmetric registered
May 13 00:28:12.914024 kernel: Asymmetric key parser 'x509' registered
May 13 00:28:12.914031 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:28:12.914038 kernel: io scheduler mq-deadline registered
May 13 00:28:12.914047 kernel: io scheduler kyber registered
May 13 00:28:12.914054 kernel: io scheduler bfq registered
May 13 00:28:12.914061 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:28:12.914069 kernel: ACPI: button: Power Button [PWRB]
May 13 00:28:12.914076 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:28:12.914149 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:28:12.914159 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:28:12.914166 kernel: thunder_xcv, ver 1.0
May 13 00:28:12.914174 kernel: thunder_bgx, ver 1.0
May 13 00:28:12.914184 kernel: nicpf, ver 1.0
May 13 00:28:12.914191 kernel: nicvf, ver 1.0
May 13 00:28:12.914275 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:28:12.914372 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:28:12 UTC (1747096092)
May 13 00:28:12.914383 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:28:12.914391 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:28:12.914399 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:28:12.914406 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:28:12.914419 kernel: NET: Registered PF_INET6 protocol family
May 13 00:28:12.914427 kernel: Segment Routing with IPv6
May 13 00:28:12.914434 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:28:12.914442 kernel: NET: Registered PF_PACKET protocol family
May 13 00:28:12.914449 kernel: Key type dns_resolver registered
May 13 00:28:12.914460 kernel: registered taskstats version 1
May 13 00:28:12.914468 kernel: Loading compiled-in X.509 certificates
May 13 00:28:12.914480 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:28:12.914488 kernel: Key type .fscrypt registered
May 13 00:28:12.914497 kernel: Key type fscrypt-provisioning registered
May 13 00:28:12.914504 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:28:12.914511 kernel: ima: Allocated hash algorithm: sha1
May 13 00:28:12.914518 kernel: ima: No architecture policies found
May 13 00:28:12.914525 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:28:12.914533 kernel: clk: Disabling unused clocks
May 13 00:28:12.914540 kernel: Freeing unused kernel memory: 39424K
May 13 00:28:12.914547 kernel: Run /init as init process
May 13 00:28:12.914554 kernel: with arguments:
May 13 00:28:12.914563 kernel: /init
May 13 00:28:12.914571 kernel: with environment:
May 13 00:28:12.914577 kernel: HOME=/
May 13 00:28:12.914585 kernel: TERM=linux
May 13 00:28:12.914592 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:28:12.914602 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:28:12.914611 systemd[1]: Detected virtualization kvm.
May 13 00:28:12.914621 systemd[1]: Detected architecture arm64.
May 13 00:28:12.914628 systemd[1]: Running in initrd.
May 13 00:28:12.914636 systemd[1]: No hostname configured, using default hostname.
May 13 00:28:12.914644 systemd[1]: Hostname set to .
May 13 00:28:12.914652 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:28:12.914660 systemd[1]: Queued start job for default target initrd.target.
May 13 00:28:12.914668 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:28:12.914676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:28:12.914685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:28:12.914694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:28:12.914702 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:28:12.914710 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:28:12.914720 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:28:12.914728 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:28:12.914736 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:28:12.914745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:28:12.914753 systemd[1]: Reached target paths.target - Path Units.
May 13 00:28:12.914761 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:28:12.914769 systemd[1]: Reached target swap.target - Swaps.
May 13 00:28:12.914776 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:28:12.914784 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:28:12.914792 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:28:12.914800 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:28:12.914816 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:28:12.914827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:28:12.914835 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:28:12.914842 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:28:12.914850 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:28:12.914858 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:28:12.914866 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:28:12.914874 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:28:12.914882 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:28:12.914891 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:28:12.914900 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:28:12.914908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:28:12.914916 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:28:12.914923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:28:12.914931 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:28:12.914941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:28:12.914949 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:28:12.914957 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:28:12.914966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:28:12.914974 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:28:12.915000 systemd-journald[238]: Collecting audit messages is disabled.
May 13 00:28:12.915021 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:28:12.915030 systemd-journald[238]: Journal started
May 13 00:28:12.915049 systemd-journald[238]: Runtime Journal (/run/log/journal/81c4af3447ed44bf8567cc7cf647a6ee) is 5.9M, max 47.3M, 41.4M free.
May 13 00:28:12.900184 systemd-modules-load[239]: Inserted module 'overlay'
May 13 00:28:12.917203 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:28:12.918569 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:28:12.920841 kernel: Bridge firewalling registered
May 13 00:28:12.918651 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 00:28:12.920122 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:28:12.934488 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:28:12.936073 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:28:12.937531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:28:12.940454 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:28:12.945559 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:28:12.949210 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:28:12.952893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:28:12.956909 dracut-cmdline[272]: dracut-dracut-053
May 13 00:28:12.959524 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:28:12.985659 systemd-resolved[281]: Positive Trust Anchors:
May 13 00:28:12.985679 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:28:12.985712 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:28:12.990530 systemd-resolved[281]: Defaulting to hostname 'linux'.
May 13 00:28:12.991637 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:28:12.995797 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:28:13.035331 kernel: SCSI subsystem initialized
May 13 00:28:13.040318 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:28:13.048349 kernel: iscsi: registered transport (tcp)
May 13 00:28:13.061336 kernel: iscsi: registered transport (qla4xxx)
May 13 00:28:13.061369 kernel: QLogic iSCSI HBA Driver
May 13 00:28:13.108385 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:28:13.119494 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:28:13.137493 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:28:13.137565 kernel: device-mapper: uevent: version 1.0.3
May 13 00:28:13.137582 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:28:13.186328 kernel: raid6: neonx8 gen() 15728 MB/s
May 13 00:28:13.203317 kernel: raid6: neonx4 gen() 15622 MB/s
May 13 00:28:13.220317 kernel: raid6: neonx2 gen() 13186 MB/s
May 13 00:28:13.237315 kernel: raid6: neonx1 gen() 10406 MB/s
May 13 00:28:13.254315 kernel: raid6: int64x8 gen() 6938 MB/s
May 13 00:28:13.271315 kernel: raid6: int64x4 gen() 7343 MB/s
May 13 00:28:13.288317 kernel: raid6: int64x2 gen() 6104 MB/s
May 13 00:28:13.305339 kernel: raid6: int64x1 gen() 5037 MB/s
May 13 00:28:13.305376 kernel: raid6: using algorithm neonx8 gen() 15728 MB/s
May 13 00:28:13.322328 kernel: raid6: .... xor() 11887 MB/s, rmw enabled
May 13 00:28:13.322342 kernel: raid6: using neon recovery algorithm
May 13 00:28:13.327321 kernel: xor: measuring software checksum speed
May 13 00:28:13.327340 kernel: 8regs : 18877 MB/sec
May 13 00:28:13.328364 kernel: 32regs : 18235 MB/sec
May 13 00:28:13.328380 kernel: arm64_neon : 26416 MB/sec
May 13 00:28:13.328396 kernel: xor: using function: arm64_neon (26416 MB/sec)
May 13 00:28:13.380333 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:28:13.391187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:28:13.410476 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:28:13.421914 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 13 00:28:13.425083 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:28:13.434623 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:28:13.445782 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
May 13 00:28:13.473178 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:28:13.491464 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:28:13.530954 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:28:13.540466 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:28:13.554594 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:28:13.556603 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:28:13.558637 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:28:13.560833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:28:13.570453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:28:13.582017 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:28:13.583553 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:28:13.589879 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:28:13.593932 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:28:13.593975 kernel: GPT:9289727 != 19775487
May 13 00:28:13.593986 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:28:13.593996 kernel: GPT:9289727 != 19775487
May 13 00:28:13.594012 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:28:13.594021 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:28:13.592822 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:28:13.592941 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:28:13.595898 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:28:13.597838 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:28:13.597990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:28:13.600241 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:28:13.612669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:28:13.617733 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (505)
May 13 00:28:13.617759 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (508)
May 13 00:28:13.625001 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:28:13.626499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:28:13.634677 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:28:13.641621 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:28:13.645217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:28:13.646137 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:28:13.659473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:28:13.661588 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:28:13.665557 disk-uuid[549]: Primary Header is updated.
May 13 00:28:13.665557 disk-uuid[549]: Secondary Entries is updated.
May 13 00:28:13.665557 disk-uuid[549]: Secondary Header is updated.
May 13 00:28:13.677354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:28:13.680322 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:28:13.682322 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:28:13.685335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:28:14.685375 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:28:14.685431 disk-uuid[551]: The operation has completed successfully.
May 13 00:28:14.708169 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:28:14.708270 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:28:14.730502 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:28:14.733651 sh[573]: Success
May 13 00:28:14.747322 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:28:14.777323 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:28:14.798726 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:28:14.801094 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:28:14.811019 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:28:14.811058 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:28:14.811069 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:28:14.811079 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:28:14.811578 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:28:14.815121 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:28:14.816548 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:28:14.826474 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:28:14.828099 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:28:14.834672 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:28:14.834704 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:28:14.834715 kernel: BTRFS info (device vda6): using free space tree
May 13 00:28:14.837336 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:28:14.844448 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:28:14.845950 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:28:14.852257 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:28:14.861517 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:28:14.933335 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:28:14.951539 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:28:14.962019 ignition[663]: Ignition 2.19.0
May 13 00:28:14.962030 ignition[663]: Stage: fetch-offline
May 13 00:28:14.962072 ignition[663]: no configs at "/usr/lib/ignition/base.d"
May 13 00:28:14.962081 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:14.962239 ignition[663]: parsed url from cmdline: ""
May 13 00:28:14.962243 ignition[663]: no config URL provided
May 13 00:28:14.962247 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:28:14.962255 ignition[663]: no config at "/usr/lib/ignition/user.ign"
May 13 00:28:14.962279 ignition[663]: op(1): [started] loading QEMU firmware config module
May 13 00:28:14.962283 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:28:14.970519 ignition[663]: op(1): [finished] loading QEMU firmware config module
May 13 00:28:14.970544 ignition[663]: QEMU firmware config was not found. Ignoring...
May 13 00:28:14.973726 systemd-networkd[765]: lo: Link UP
May 13 00:28:14.973737 systemd-networkd[765]: lo: Gained carrier
May 13 00:28:14.974536 systemd-networkd[765]: Enumeration completed
May 13 00:28:14.974652 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:28:14.974956 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:28:14.974960 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:28:14.975784 systemd-networkd[765]: eth0: Link UP
May 13 00:28:14.975788 systemd-networkd[765]: eth0: Gained carrier
May 13 00:28:14.975803 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:28:14.976689 systemd[1]: Reached target network.target - Network.
May 13 00:28:14.999352 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:28:15.021440 ignition[663]: parsing config with SHA512: 802856f1f4adae6507e5e4db2a59b3fcefc02dcbfa1e75842a1ede98f81afbf1ffd7b58ff210745b95dcfbf9f96fd19891b3da6c8658b590669bc4620a77e44c
May 13 00:28:15.027157 unknown[663]: fetched base config from "system"
May 13 00:28:15.027172 unknown[663]: fetched user config from "qemu"
May 13 00:28:15.027831 ignition[663]: fetch-offline: fetch-offline passed
May 13 00:28:15.029557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:28:15.028052 ignition[663]: Ignition finished successfully
May 13 00:28:15.031161 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:28:15.039509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 13 00:28:15.050240 ignition[771]: Ignition 2.19.0
May 13 00:28:15.050251 ignition[771]: Stage: kargs
May 13 00:28:15.050429 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 13 00:28:15.050439 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:15.051388 ignition[771]: kargs: kargs passed
May 13 00:28:15.054194 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:28:15.051434 ignition[771]: Ignition finished successfully
May 13 00:28:15.062539 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:28:15.072604 ignition[780]: Ignition 2.19.0
May 13 00:28:15.072614 ignition[780]: Stage: disks
May 13 00:28:15.072810 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 13 00:28:15.072820 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:15.075223 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:28:15.073784 ignition[780]: disks: disks passed
May 13 00:28:15.077002 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:28:15.073840 ignition[780]: Ignition finished successfully
May 13 00:28:15.078625 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:28:15.080248 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:28:15.082111 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:28:15.083680 systemd[1]: Reached target basic.target - Basic System.
May 13 00:28:15.096467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:28:15.106269 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:28:15.110135 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:28:15.112818 systemd[1]: Mounting sysroot.mount - /sysroot...
May 13 00:28:15.156326 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:28:15.156889 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:28:15.158185 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:28:15.169393 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:28:15.171162 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:28:15.172384 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:28:15.172477 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:28:15.172533 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:28:15.182816 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
May 13 00:28:15.182850 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:28:15.182863 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:28:15.182873 kernel: BTRFS info (device vda6): using free space tree
May 13 00:28:15.178804 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:28:15.180614 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:28:15.186180 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:28:15.188139 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:28:15.224517 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:28:15.228704 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
May 13 00:28:15.232472 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:28:15.236254 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:28:15.306168 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:28:15.324400 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:28:15.326908 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:28:15.331318 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:28:15.347392 ignition[911]: INFO : Ignition 2.19.0
May 13 00:28:15.347392 ignition[911]: INFO : Stage: mount
May 13 00:28:15.350235 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:28:15.350235 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:15.350235 ignition[911]: INFO : mount: mount passed
May 13 00:28:15.350235 ignition[911]: INFO : Ignition finished successfully
May 13 00:28:15.347918 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:28:15.349873 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:28:15.359440 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:28:15.809439 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:28:15.821511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:28:15.826325 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
May 13 00:28:15.828844 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:28:15.828865 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:28:15.828876 kernel: BTRFS info (device vda6): using free space tree
May 13 00:28:15.831332 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:28:15.831898 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:28:15.847645 ignition[942]: INFO : Ignition 2.19.0
May 13 00:28:15.847645 ignition[942]: INFO : Stage: files
May 13 00:28:15.849261 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:28:15.849261 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:15.849261 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:28:15.852719 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:28:15.852719 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:28:15.855666 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:28:15.857003 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:28:15.857003 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:28:15.856215 unknown[942]: wrote ssh authorized keys file for user: core
May 13 00:28:15.860839 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:28:15.860839 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:28:15.860839 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:28:15.860839 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 13 00:28:15.956199 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 13 00:28:16.120187 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 13 00:28:16.120187 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:28:16.123878 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 13 00:28:16.381482 systemd-networkd[765]: eth0: Gained IPv6LL
May 13 00:28:16.482689 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
May 13 00:28:16.600149 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:28:16.602093 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 00:28:16.820207 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
May 13 00:28:17.225959 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:28:17.225959 ignition[942]: INFO : files: op(d): [started] processing unit "containerd.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
May 13 00:28:17.229669 ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:28:17.259476 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:28:17.263493 ignition[942]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:28:17.265132 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:28:17.265132 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
May 13 00:28:17.265132 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
May 13 00:28:17.265132 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:28:17.265132 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:28:17.265132 ignition[942]: INFO : files: files passed
May 13 00:28:17.265132 ignition[942]: INFO : Ignition finished successfully
May 13 00:28:17.266736 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:28:17.279463 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:28:17.282859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:28:17.284290 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:28:17.284401 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:28:17.291754 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:28:17.294715 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:28:17.294715 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:28:17.298334 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:28:17.298805 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:28:17.302633 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:28:17.316254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:28:17.338129 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:28:17.338248 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:28:17.339763 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:28:17.341358 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:28:17.343401 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:28:17.344248 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:28:17.361825 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:28:17.371513 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:28:17.379334 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:28:17.380556 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:28:17.382479 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:28:17.384184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:28:17.384299 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:28:17.386748 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:28:17.388681 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:28:17.390236 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:28:17.391914 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:28:17.393805 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:28:17.395687 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:28:17.397426 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:28:17.399324 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:28:17.401265 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:28:17.403001 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:28:17.404463 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:28:17.404591 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:28:17.406909 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:28:17.408860 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:28:17.410711 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:28:17.411499 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:28:17.412715 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:28:17.412849 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:28:17.415532 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:28:17.415649 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:28:17.417483 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:28:17.419024 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:28:17.423363 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:28:17.424581 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:28:17.426608 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:28:17.428128 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:28:17.428216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:28:17.429699 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:28:17.429791 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:28:17.431325 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:28:17.431436 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:28:17.433183 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:28:17.433287 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:28:17.450500 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:28:17.451415 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:28:17.451548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:28:17.454136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:28:17.454971 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:28:17.455094 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:28:17.457108 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:28:17.457210 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:28:17.464165 ignition[998]: INFO : Ignition 2.19.0
May 13 00:28:17.464165 ignition[998]: INFO : Stage: umount
May 13 00:28:17.464165 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:28:17.464165 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:28:17.462978 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:28:17.473238 ignition[998]: INFO : umount: umount passed
May 13 00:28:17.473238 ignition[998]: INFO : Ignition finished successfully
May 13 00:28:17.463665 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:28:17.467382 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:28:17.467495 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:28:17.470307 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:28:17.470695 systemd[1]: Stopped target network.target - Network.
May 13 00:28:17.472369 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:28:17.472426 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:28:17.474208 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:28:17.474256 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:28:17.475822 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:28:17.475863 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:28:17.477466 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:28:17.477510 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:28:17.479580 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:28:17.481153 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:28:17.488567 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:28:17.488692 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:28:17.492053 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:28:17.492104 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:28:17.492431 systemd-networkd[765]: eth0: DHCPv6 lease lost
May 13 00:28:17.496736 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:28:17.498378 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:28:17.499649 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:28:17.499681 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:28:17.512431 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:28:17.513338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:28:17.513408 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:28:17.515475 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:28:17.515525 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:28:17.517673 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:28:17.517730 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:28:17.520873 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:28:17.530824 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:28:17.530945 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:28:17.533216 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:28:17.533350 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:28:17.535442 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:28:17.535508 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:28:17.537603 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:28:17.537636 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:28:17.539142 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:28:17.539191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:28:17.542414 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:28:17.542463 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:28:17.545858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:28:17.545915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:28:17.560538 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:28:17.561591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:28:17.561653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:28:17.563962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:28:17.564022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:28:17.566041 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:28:17.566382 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:28:17.567826 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:28:17.567910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:28:17.570210 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:28:17.571514 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:28:17.571584 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:28:17.574118 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:28:17.584129 systemd[1]: Switching root.
May 13 00:28:17.605027 systemd-journald[238]: Journal stopped
May 13 00:28:18.361110 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 13 00:28:18.361163 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:28:18.361177 kernel: SELinux: policy capability open_perms=1
May 13 00:28:18.361186 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:28:18.361196 kernel: SELinux: policy capability always_check_network=0
May 13 00:28:18.361206 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:28:18.361219 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:28:18.361229 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:28:18.361238 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:28:18.361249 kernel: audit: type=1403 audit(1747096097.804:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:28:18.361260 systemd[1]: Successfully loaded SELinux policy in 33.427ms.
May 13 00:28:18.361277 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.377ms.
May 13 00:28:18.361289 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:28:18.361316 systemd[1]: Detected virtualization kvm.
May 13 00:28:18.361331 systemd[1]: Detected architecture arm64.
May 13 00:28:18.361343 systemd[1]: Detected first boot.
May 13 00:28:18.361354 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:28:18.361365 zram_generator::config[1063]: No configuration found.
May 13 00:28:18.361376 systemd[1]: Populated /etc with preset unit settings.
May 13 00:28:18.361387 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:28:18.361398 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:28:18.361409 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:28:18.361420 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:28:18.361433 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:28:18.361444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:28:18.361454 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:28:18.361465 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:28:18.361476 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:28:18.361487 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:28:18.361497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:28:18.361508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:28:18.361519 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:28:18.361531 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:28:18.361543 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:28:18.361553 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:28:18.361568 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 00:28:18.361579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:28:18.361590 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:28:18.361601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:28:18.361612 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:28:18.361623 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:28:18.361636 systemd[1]: Reached target swap.target - Swaps.
May 13 00:28:18.361647 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:28:18.361658 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:28:18.361669 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:28:18.361680 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:28:18.361691 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:28:18.361702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:28:18.361713 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:28:18.361725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:28:18.361736 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:28:18.361746 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:28:18.361757 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:28:18.361768 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:28:18.361786 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:28:18.361799 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:28:18.361812 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:28:18.361823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:28:18.361837 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:28:18.361848 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:28:18.361859 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:28:18.361870 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:28:18.361881 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:28:18.361892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:28:18.361903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:28:18.361914 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:28:18.361927 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 00:28:18.361939 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 13 00:28:18.361949 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:28:18.361960 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:28:18.361970 kernel: loop: module loaded
May 13 00:28:18.361980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:28:18.361990 kernel: ACPI: bus type drm_connector registered
May 13 00:28:18.362000 kernel: fuse: init (API version 7.39)
May 13 00:28:18.362010 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:28:18.362022 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:28:18.362033 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:28:18.362050 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:28:18.362061 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:28:18.362075 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:28:18.362086 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:28:18.362098 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:28:18.362127 systemd-journald[1138]: Collecting audit messages is disabled.
May 13 00:28:18.362153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:28:18.362166 systemd-journald[1138]: Journal started
May 13 00:28:18.362187 systemd-journald[1138]: Runtime Journal (/run/log/journal/81c4af3447ed44bf8567cc7cf647a6ee) is 5.9M, max 47.3M, 41.4M free.
May 13 00:28:18.364402 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:28:18.364848 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:28:18.365024 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:28:18.366295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:28:18.366465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:28:18.367551 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:28:18.367705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:28:18.368863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:28:18.369011 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:28:18.370164 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:28:18.370487 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:28:18.371661 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:28:18.373003 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:28:18.373222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:28:18.374662 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:28:18.375956 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:28:18.377337 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:28:18.387430 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:28:18.393368 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:28:18.395388 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:28:18.396340 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:28:18.399563 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:28:18.403364 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:28:18.404647 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:28:18.407524 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:28:18.408527 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:28:18.410913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:28:18.414088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:28:18.418795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:28:18.422463 systemd-journald[1138]: Time spent on flushing to /var/log/journal/81c4af3447ed44bf8567cc7cf647a6ee is 12.697ms for 852 entries.
May 13 00:28:18.422463 systemd-journald[1138]: System Journal (/var/log/journal/81c4af3447ed44bf8567cc7cf647a6ee) is 8.0M, max 195.6M, 187.6M free.
May 13 00:28:18.452068 systemd-journald[1138]: Received client request to flush runtime journal.
May 13 00:28:18.422596 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:28:18.424209 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:28:18.425365 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:28:18.428458 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:28:18.440638 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
May 13 00:28:18.440649 systemd-tmpfiles[1194]: ACLs are not supported, ignoring.
May 13 00:28:18.441508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:28:18.445335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:28:18.449559 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:28:18.451794 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:28:18.453904 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:28:18.458409 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 13 00:28:18.474826 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:28:18.485455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:28:18.497502 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 13 00:28:18.497520 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
May 13 00:28:18.501091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:28:18.850296 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:28:18.866520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:28:18.887340 systemd-udevd[1221]: Using default interface naming scheme 'v255'.
May 13 00:28:18.900159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:28:18.908613 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:28:18.924549 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:28:18.943331 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1226)
May 13 00:28:18.949748 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:28:18.966255 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 13 00:28:18.983681 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:28:19.026559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:28:19.040467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:28:19.050407 systemd-networkd[1228]: lo: Link UP
May 13 00:28:19.050419 systemd-networkd[1228]: lo: Gained carrier
May 13 00:28:19.051083 systemd-networkd[1228]: Enumeration completed
May 13 00:28:19.051512 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:28:19.051515 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:28:19.052092 systemd-networkd[1228]: eth0: Link UP
May 13 00:28:19.052095 systemd-networkd[1228]: eth0: Gained carrier
May 13 00:28:19.052107 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:28:19.052585 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:28:19.053587 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:28:19.056084 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:28:19.064299 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:28:19.066014 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:28:19.071354 systemd-networkd[1228]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:28:19.096785 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:28:19.098257 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:28:19.110514 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:28:19.114034 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:28:19.140750 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:28:19.142178 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:28:19.143434 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:28:19.143468 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:28:19.144441 systemd[1]: Reached target machines.target - Containers.
May 13 00:28:19.146415 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:28:19.156448 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:28:19.158663 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:28:19.159764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:28:19.160743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:28:19.162993 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:28:19.167540 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:28:19.169686 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 00:28:19.175590 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 00:28:19.181351 kernel: loop0: detected capacity change from 0 to 114328
May 13 00:28:19.192838 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:28:19.191190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:28:19.191932 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 00:28:19.224323 kernel: loop1: detected capacity change from 0 to 194096
May 13 00:28:19.266320 kernel: loop2: detected capacity change from 0 to 114432
May 13 00:28:19.307322 kernel: loop3: detected capacity change from 0 to 114328
May 13 00:28:19.312322 kernel: loop4: detected capacity change from 0 to 194096
May 13 00:28:19.326367 kernel: loop5: detected capacity change from 0 to 114432
May 13 00:28:19.333746 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 00:28:19.334160 (sd-merge)[1287]: Merged extensions into '/usr'.
May 13 00:28:19.339196 systemd[1]: Reloading requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 00:28:19.339215 systemd[1]: Reloading...
May 13 00:28:19.373855 zram_generator::config[1314]: No configuration found.
May 13 00:28:19.463492 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:28:19.473111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:28:19.517399 systemd[1]: Reloading finished in 177 ms.
May 13 00:28:19.536421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 00:28:19.537611 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 00:28:19.550647 systemd[1]: Starting ensure-sysext.service...
May 13 00:28:19.552463 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:28:19.555720 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)...
May 13 00:28:19.555735 systemd[1]: Reloading...
May 13 00:28:19.568707 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:28:19.568981 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 00:28:19.569738 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:28:19.569965 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
May 13 00:28:19.570015 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
May 13 00:28:19.572440 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:28:19.572453 systemd-tmpfiles[1357]: Skipping /boot
May 13 00:28:19.579540 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:28:19.579555 systemd-tmpfiles[1357]: Skipping /boot
May 13 00:28:19.606319 zram_generator::config[1385]: No configuration found.
May 13 00:28:19.695065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:28:19.737825 systemd[1]: Reloading finished in 181 ms.
May 13 00:28:19.752070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:28:19.775716 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:28:19.778096 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 00:28:19.780348 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 00:28:19.784546 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:28:19.787539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 00:28:19.792001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:28:19.795511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:28:19.799467 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:28:19.806014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:28:19.806942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:28:19.807661 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:28:19.807826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:28:19.810855 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:28:19.811001 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:28:19.812528 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:28:19.812733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:28:19.821901 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 00:28:19.823945 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 00:28:19.826901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:28:19.841655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:28:19.846009 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:28:19.850136 augenrules[1463]: No rules
May 13 00:28:19.850406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:28:19.851273 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:28:19.853562 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 00:28:19.856377 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 00:28:19.858060 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:28:19.859596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:28:19.859748 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:28:19.861169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:28:19.861320 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:28:19.862609 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:28:19.862810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:28:19.868611 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 00:28:19.872537 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:28:19.875516 systemd-resolved[1431]: Positive Trust Anchors:
May 13 00:28:19.875662 systemd-resolved[1431]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:28:19.875695 systemd-resolved[1431]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:28:19.880514 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:28:19.881737 systemd-resolved[1431]: Defaulting to hostname 'linux'.
May 13 00:28:19.882370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:28:19.886494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:28:19.888471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:28:19.889368 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:28:19.889437 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:28:19.889758 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:28:19.891264 systemd[1]: Finished ensure-sysext.service.
May 13 00:28:19.892356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:28:19.892522 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:28:19.893630 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:28:19.893784 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:28:19.894954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:28:19.895090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:28:19.896376 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:28:19.896580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:28:19.902057 systemd[1]: Reached target network.target - Network.
May 13 00:28:19.903031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:28:19.904168 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:28:19.904247 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:28:19.918483 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 00:28:19.958517 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 00:28:19.959319 systemd-timesyncd[1499]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:28:19.959366 systemd-timesyncd[1499]: Initial clock synchronization to Tue 2025-05-13 00:28:20.097991 UTC.
May 13 00:28:19.960119 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:28:19.961280 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 00:28:19.962432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 00:28:19.963597 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 00:28:19.964788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:28:19.964823 systemd[1]: Reached target paths.target - Path Units.
May 13 00:28:19.965681 systemd[1]: Reached target time-set.target - System Time Set.
May 13 00:28:19.966812 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 00:28:19.967937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 00:28:19.969108 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:28:19.970784 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 00:28:19.973361 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 00:28:19.975476 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 00:28:19.978284 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 00:28:19.979292 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:28:19.980219 systemd[1]: Reached target basic.target - Basic System.
May 13 00:28:19.981341 systemd[1]: System is tainted: cgroupsv1
May 13 00:28:19.981392 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 00:28:19.981415 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 00:28:19.982550 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 00:28:19.984362 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 00:28:19.986142 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 00:28:19.990466 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 00:28:19.991194 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 00:28:19.992255 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 00:28:19.998229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 13 00:28:20.000498 jq[1505]: false
May 13 00:28:20.002470 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 00:28:20.004736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 00:28:20.007872 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 00:28:20.009383 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:28:20.013495 systemd[1]: Starting update-engine.service - Update Engine...
May 13 00:28:20.018471 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 00:28:20.020685 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:28:20.020925 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 00:28:20.022769 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:28:20.022999 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 00:28:20.029561 dbus-daemon[1504]: [system] SELinux support is enabled
May 13 00:28:20.030415 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 00:28:20.040676 jq[1520]: true
May 13 00:28:20.050819 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:28:20.053423 extend-filesystems[1507]: Found loop3
May 13 00:28:20.053423 extend-filesystems[1507]: Found loop4
May 13 00:28:20.053423 extend-filesystems[1507]: Found loop5
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda1
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda2
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda3
May 13 00:28:20.053423 extend-filesystems[1507]: Found usr
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda4
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda6
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda7
May 13 00:28:20.053423 extend-filesystems[1507]: Found vda9
May 13 00:28:20.053423 extend-filesystems[1507]: Checking size of /dev/vda9
May 13 00:28:20.051069 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 00:28:20.077117 tar[1522]: linux-arm64/helm
May 13 00:28:20.051667 (ntainerd)[1535]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 00:28:20.077558 jq[1539]: true
May 13 00:28:20.058148 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:28:20.058187 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 00:28:20.062885 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:28:20.062915 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 00:28:20.086224 update_engine[1518]: I20250513 00:28:20.086004 1518 main.cc:92] Flatcar Update Engine starting
May 13 00:28:20.092800 update_engine[1518]: I20250513 00:28:20.092737 1518 update_check_scheduler.cc:74] Next update check in 8m37s
May 13 00:28:20.094448 systemd[1]: Started update-engine.service - Update Engine.
May 13 00:28:20.096556 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:28:20.101438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1231)
May 13 00:28:20.106153 extend-filesystems[1507]: Resized partition /dev/vda9
May 13 00:28:20.106606 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 00:28:20.126249 extend-filesystems[1558]: resize2fs 1.47.1 (20-May-2024)
May 13 00:28:20.137370 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:28:20.125864 systemd-logind[1515]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 00:28:20.134235 systemd-logind[1515]: New seat seat0.
May 13 00:28:20.141430 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 00:28:20.193094 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:28:20.201958 extend-filesystems[1558]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:28:20.201958 extend-filesystems[1558]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:28:20.201958 extend-filesystems[1558]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:28:20.204656 extend-filesystems[1507]: Resized filesystem in /dev/vda9
May 13 00:28:20.205992 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:28:20.206261 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 00:28:20.212240 bash[1565]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:28:20.213664 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 00:28:20.215847 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 00:28:20.221899 locksmithd[1555]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:28:20.301057 containerd[1535]: time="2025-05-13T00:28:20.300925369Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 00:28:20.329125 containerd[1535]: time="2025-05-13T00:28:20.329065680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.330973 containerd[1535]: time="2025-05-13T00:28:20.330937395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:28:20.331057 containerd[1535]: time="2025-05-13T00:28:20.331044338Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:28:20.331157 containerd[1535]: time="2025-05-13T00:28:20.331141760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:28:20.331464 containerd[1535]: time="2025-05-13T00:28:20.331443993Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 00:28:20.331593 containerd[1535]: time="2025-05-13T00:28:20.331576208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.331798 containerd[1535]: time="2025-05-13T00:28:20.331710619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:28:20.331798 containerd[1535]: time="2025-05-13T00:28:20.331737071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332167368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332238663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332258074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332268818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332387806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.332189 containerd[1535]: time="2025-05-13T00:28:20.332597421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:28:20.333026 containerd[1535]: time="2025-05-13T00:28:20.333004034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:28:20.333154 containerd[1535]: time="2025-05-13T00:28:20.333137673Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:28:20.333349 containerd[1535]: time="2025-05-13T00:28:20.333332108Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:28:20.333519 containerd[1535]: time="2025-05-13T00:28:20.333501598Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:28:20.339459 containerd[1535]: time="2025-05-13T00:28:20.339398142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:28:20.339459 containerd[1535]: time="2025-05-13T00:28:20.339447504Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:28:20.339566 containerd[1535]: time="2025-05-13T00:28:20.339470537Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 00:28:20.339566 containerd[1535]: time="2025-05-13T00:28:20.339489052Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 00:28:20.339566 containerd[1535]: time="2025-05-13T00:28:20.339506347Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:28:20.339878 containerd[1535]: time="2025-05-13T00:28:20.339640637Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:28:20.340225 containerd[1535]: time="2025-05-13T00:28:20.340198265Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:28:20.340350 containerd[1535]: time="2025-05-13T00:28:20.340333044Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 00:28:20.340375 containerd[1535]: time="2025-05-13T00:28:20.340354815Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 00:28:20.340375 containerd[1535]: time="2025-05-13T00:28:20.340369627Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 00:28:20.340424 containerd[1535]: time="2025-05-13T00:28:20.340384277Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340424 containerd[1535]: time="2025-05-13T00:28:20.340399171Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340424 containerd[1535]: time="2025-05-13T00:28:20.340413292Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340473 containerd[1535]: time="2025-05-13T00:28:20.340428674Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340473 containerd[1535]: time="2025-05-13T00:28:20.340443934Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340473 containerd[1535]: time="2025-05-13T00:28:20.340456590Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340473 containerd[1535]: time="2025-05-13T00:28:20.340468554Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340546 containerd[1535]: time="2025-05-13T00:28:20.340481088Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:28:20.340546 containerd[1535]: time="2025-05-13T00:28:20.340501476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340546 containerd[1535]: time="2025-05-13T00:28:20.340522596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340546 containerd[1535]: time="2025-05-13T00:28:20.340535577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340630 containerd[1535]: time="2025-05-13T00:28:20.340549779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340630 containerd[1535]: time="2025-05-13T00:28:20.340563290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340630 containerd[1535]: time="2025-05-13T00:28:20.340592996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340630 containerd[1535]: time="2025-05-13T00:28:20.340607727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340630 containerd[1535]: time="2025-05-13T00:28:20.340621563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340634016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340648910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340660548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340673082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340689115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340716 containerd[1535]: time="2025-05-13T00:28:20.340710520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 00:28:20.340816 containerd[1535]: time="2025-05-13T00:28:20.340732210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340816 containerd[1535]: time="2025-05-13T00:28:20.340745558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340816 containerd[1535]: time="2025-05-13T00:28:20.340756748Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340910205Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340930471Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340942801Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340955579Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340965182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340977757Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 00:28:20.340986 containerd[1535]: time="2025-05-13T00:28:20.340987645Z" level=info msg="NRI interface is disabled by configuration."
May 13 00:28:20.341133 containerd[1535]: time="2025-05-13T00:28:20.340998185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:28:20.341471 containerd[1535]: time="2025-05-13T00:28:20.341411839Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:28:20.341471 containerd[1535]: time="2025-05-13T00:28:20.341474222Z" level=info msg="Connect containerd service"
May 13 00:28:20.341640 containerd[1535]: time="2025-05-13T00:28:20.341575469Z" level=info msg="using legacy CRI server"
May 13 00:28:20.341640 containerd[1535]: time="2025-05-13T00:28:20.341582753Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 00:28:20.345076 containerd[1535]: time="2025-05-13T00:28:20.345042384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:28:20.345992 containerd[1535]: time="2025-05-13T00:28:20.345791355Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:28:20.346582 containerd[1535]: time="2025-05-13T00:28:20.346547285Z" level=info msg="Start subscribing containerd event"
May 13 00:28:20.346633 containerd[1535]: time="2025-05-13T00:28:20.346601326Z" level=info msg="Start recovering state"
May 13 00:28:20.346685 containerd[1535]: time="2025-05-13T00:28:20.346662245Z" level=info msg="Start event monitor"
May 13 00:28:20.346713 containerd[1535]: time="2025-05-13T00:28:20.346685278Z" level=info msg="Start snapshots syncer"
May 13 00:28:20.346713 containerd[1535]: time="2025-05-13T00:28:20.346695004Z" level=info msg="Start cni network conf syncer for default"
May 13 00:28:20.346713 containerd[1535]: time="2025-05-13T00:28:20.346708229Z" level=info msg="Start streaming server"
May 13 00:28:20.348647 containerd[1535]: time="2025-05-13T00:28:20.347511933Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:28:20.348647 containerd[1535]: time="2025-05-13T00:28:20.347582659Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:28:20.347780 systemd[1]: Started containerd.service - containerd container runtime.
May 13 00:28:20.348903 containerd[1535]: time="2025-05-13T00:28:20.348846408Z" level=info msg="containerd successfully booted in 0.049332s"
May 13 00:28:20.459646 tar[1522]: linux-arm64/LICENSE
May 13 00:28:20.459646 tar[1522]: linux-arm64/README.md
May 13 00:28:20.476115 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 13 00:28:20.499712 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:28:20.519349 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 00:28:20.532643 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 00:28:20.538632 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:28:20.538890 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 00:28:20.541842 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 00:28:20.554552 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 00:28:20.568655 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 00:28:20.571139 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 00:28:20.572465 systemd[1]: Reached target getty.target - Login Prompts.
May 13 00:28:20.927150 systemd-networkd[1228]: eth0: Gained IPv6LL
May 13 00:28:20.929508 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 00:28:20.931455 systemd[1]: Reached target network-online.target - Network is Online.
May 13 00:28:20.951597 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 00:28:20.954354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:28:20.956494 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 00:28:20.977283 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 00:28:20.977555 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 00:28:20.979821 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 00:28:20.995111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 00:28:21.478301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:21.480673 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 00:28:21.485650 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:28:21.487432 systemd[1]: Startup finished in 5.644s (kernel) + 3.716s (userspace) = 9.361s.
May 13 00:28:22.055202 kubelet[1641]: E0513 00:28:22.055134 1641 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:28:22.057507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:28:22.057843 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:28:25.617384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 00:28:25.630555 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:51856.service - OpenSSH per-connection server daemon (10.0.0.1:51856).
May 13 00:28:25.683376 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 51856 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:25.684161 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:25.699809 systemd-logind[1515]: New session 1 of user core.
May 13 00:28:25.700696 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 00:28:25.716547 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 00:28:25.731526 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 00:28:25.743761 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 00:28:25.746703 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:28:25.827756 systemd[1662]: Queued start job for default target default.target.
May 13 00:28:25.830127 systemd[1662]: Created slice app.slice - User Application Slice.
May 13 00:28:25.830596 systemd[1662]: Reached target paths.target - Paths.
May 13 00:28:25.830726 systemd[1662]: Reached target timers.target - Timers.
May 13 00:28:25.838461 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 00:28:25.847055 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 00:28:25.847357 systemd[1662]: Reached target sockets.target - Sockets.
May 13 00:28:25.847373 systemd[1662]: Reached target basic.target - Basic System.
May 13 00:28:25.847420 systemd[1662]: Reached target default.target - Main User Target.
May 13 00:28:25.847446 systemd[1662]: Startup finished in 94ms.
May 13 00:28:25.847494 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 00:28:25.848602 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 00:28:25.915003 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:51860.service - OpenSSH per-connection server daemon (10.0.0.1:51860).
May 13 00:28:25.959361 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 51860 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:25.960644 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:25.965124 systemd-logind[1515]: New session 2 of user core.
May 13 00:28:25.975575 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 00:28:26.030592 sshd[1674]: pam_unix(sshd:session): session closed for user core
May 13 00:28:26.040619 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:51872.service - OpenSSH per-connection server daemon (10.0.0.1:51872).
May 13 00:28:26.041018 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:51860.service: Deactivated successfully.
May 13 00:28:26.045759 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:28:26.046131 systemd-logind[1515]: Session 2 logged out. Waiting for processes to exit.
May 13 00:28:26.047891 systemd-logind[1515]: Removed session 2.
May 13 00:28:26.079476 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 51872 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:26.080749 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:26.085049 systemd-logind[1515]: New session 3 of user core.
May 13 00:28:26.094655 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 00:28:26.147029 sshd[1679]: pam_unix(sshd:session): session closed for user core
May 13 00:28:26.149801 systemd-logind[1515]: Session 3 logged out. Waiting for processes to exit.
May 13 00:28:26.149905 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:51872.service: Deactivated successfully.
May 13 00:28:26.153280 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:28:26.158610 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876).
May 13 00:28:26.158974 systemd-logind[1515]: Removed session 3.
May 13 00:28:26.196573 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:26.197932 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:26.201546 systemd-logind[1515]: New session 4 of user core.
May 13 00:28:26.212619 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 00:28:26.268107 sshd[1690]: pam_unix(sshd:session): session closed for user core
May 13 00:28:26.278693 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:51888.service - OpenSSH per-connection server daemon (10.0.0.1:51888).
May 13 00:28:26.279263 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:51876.service: Deactivated successfully.
May 13 00:28:26.280830 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:28:26.281501 systemd-logind[1515]: Session 4 logged out. Waiting for processes to exit.
May 13 00:28:26.283437 systemd-logind[1515]: Removed session 4.
May 13 00:28:26.318450 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 51888 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:26.319828 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:26.325515 systemd-logind[1515]: New session 5 of user core.
May 13 00:28:26.333615 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 00:28:26.398037 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 00:28:26.398357 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:28:26.412186 sudo[1702]: pam_unix(sudo:session): session closed for user root
May 13 00:28:26.418804 sshd[1695]: pam_unix(sshd:session): session closed for user core
May 13 00:28:26.429566 systemd[1]: Started sshd@5-10.0.0.98:22-10.0.0.1:51900.service - OpenSSH per-connection server daemon (10.0.0.1:51900).
May 13 00:28:26.429971 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:51888.service: Deactivated successfully.
May 13 00:28:26.432460 systemd-logind[1515]: Session 5 logged out. Waiting for processes to exit.
May 13 00:28:26.433180 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:28:26.434498 systemd-logind[1515]: Removed session 5.
May 13 00:28:26.471424 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 51900 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:26.472904 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:26.477375 systemd-logind[1515]: New session 6 of user core.
May 13 00:28:26.487582 systemd[1]: Started session-6.scope - Session 6 of User core.
May 13 00:28:26.544441 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 00:28:26.544712 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:28:26.548171 sudo[1712]: pam_unix(sudo:session): session closed for user root
May 13 00:28:26.553650 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 00:28:26.553970 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:28:26.574593 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 13 00:28:26.577375 auditctl[1715]: No rules
May 13 00:28:26.577767 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 00:28:26.578015 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 13 00:28:26.580613 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:28:26.607046 augenrules[1734]: No rules
May 13 00:28:26.607753 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:28:26.609823 sudo[1711]: pam_unix(sudo:session): session closed for user root
May 13 00:28:26.611752 sshd[1704]: pam_unix(sshd:session): session closed for user core
May 13 00:28:26.624634 systemd[1]: Started sshd@6-10.0.0.98:22-10.0.0.1:51904.service - OpenSSH per-connection server daemon (10.0.0.1:51904).
May 13 00:28:26.625092 systemd[1]: sshd@5-10.0.0.98:22-10.0.0.1:51900.service: Deactivated successfully.
May 13 00:28:26.626734 systemd[1]: session-6.scope: Deactivated successfully.
May 13 00:28:26.632500 systemd-logind[1515]: Session 6 logged out. Waiting for processes to exit.
May 13 00:28:26.633729 systemd-logind[1515]: Removed session 6.
May 13 00:28:26.667258 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 51904 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:28:26.668634 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:28:26.674023 systemd-logind[1515]: New session 7 of user core.
May 13 00:28:26.681629 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 00:28:26.737024 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:28:26.740077 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:28:27.060580 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 13 00:28:27.060848 (dockerd)[1767]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 13 00:28:27.388480 dockerd[1767]: time="2025-05-13T00:28:27.388353709Z" level=info msg="Starting up"
May 13 00:28:27.595865 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2188843115-merged.mount: Deactivated successfully.
May 13 00:28:27.776389 dockerd[1767]: time="2025-05-13T00:28:27.776272997Z" level=info msg="Loading containers: start."
May 13 00:28:27.886043 kernel: Initializing XFRM netlink socket
May 13 00:28:27.979053 systemd-networkd[1228]: docker0: Link UP
May 13 00:28:27.997752 dockerd[1767]: time="2025-05-13T00:28:27.997687945Z" level=info msg="Loading containers: done."
May 13 00:28:28.016371 dockerd[1767]: time="2025-05-13T00:28:28.016285308Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 13 00:28:28.016619 dockerd[1767]: time="2025-05-13T00:28:28.016448354Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 13 00:28:28.016619 dockerd[1767]: time="2025-05-13T00:28:28.016572651Z" level=info msg="Daemon has completed initialization"
May 13 00:28:28.049948 dockerd[1767]: time="2025-05-13T00:28:28.049122921Z" level=info msg="API listen on /run/docker.sock"
May 13 00:28:28.049473 systemd[1]: Started docker.service - Docker Application Container Engine.
May 13 00:28:28.712093 containerd[1535]: time="2025-05-13T00:28:28.712004707Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 13 00:28:29.328509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3287552224.mount: Deactivated successfully.
May 13 00:28:30.334584 containerd[1535]: time="2025-05-13T00:28:30.334480140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:30.335708 containerd[1535]: time="2025-05-13T00:28:30.335660944Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 13 00:28:30.336680 containerd[1535]: time="2025-05-13T00:28:30.336644533Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:30.340316 containerd[1535]: time="2025-05-13T00:28:30.340266949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:30.344323 containerd[1535]: time="2025-05-13T00:28:30.342925547Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.630876195s"
May 13 00:28:30.344323 containerd[1535]: time="2025-05-13T00:28:30.342972801Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 13 00:28:30.363257 containerd[1535]: time="2025-05-13T00:28:30.363213392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 13 00:28:31.554062 containerd[1535]: time="2025-05-13T00:28:31.553993396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:31.555164 containerd[1535]: time="2025-05-13T00:28:31.554939680Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 13 00:28:31.555938 containerd[1535]: time="2025-05-13T00:28:31.555900461Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:31.558929 containerd[1535]: time="2025-05-13T00:28:31.558866337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:31.560453 containerd[1535]: time="2025-05-13T00:28:31.560359677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.197101165s"
May 13 00:28:31.560453 containerd[1535]: time="2025-05-13T00:28:31.560404013Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 13 00:28:31.579835 containerd[1535]: time="2025-05-13T00:28:31.579754817Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 13 00:28:32.308026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 13 00:28:32.325555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:28:32.421938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:32.426460 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:28:32.475631 kubelet[2005]: E0513 00:28:32.475562 2005 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:28:32.479128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:28:32.479354 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:28:32.634449 containerd[1535]: time="2025-05-13T00:28:32.634331166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:32.635492 containerd[1535]: time="2025-05-13T00:28:32.635456163Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 13 00:28:32.636356 containerd[1535]: time="2025-05-13T00:28:32.636326754Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:32.639405 containerd[1535]: time="2025-05-13T00:28:32.639355540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:32.640568 containerd[1535]: time="2025-05-13T00:28:32.640541188Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.060722523s"
May 13 00:28:32.640734 containerd[1535]: time="2025-05-13T00:28:32.640643866Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 13 00:28:32.659297 containerd[1535]: time="2025-05-13T00:28:32.659242142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 00:28:33.651062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4035527195.mount: Deactivated successfully.
May 13 00:28:33.980288 containerd[1535]: time="2025-05-13T00:28:33.980235010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:33.980905 containerd[1535]: time="2025-05-13T00:28:33.980869785Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 13 00:28:33.981554 containerd[1535]: time="2025-05-13T00:28:33.981526586Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:33.983559 containerd[1535]: time="2025-05-13T00:28:33.983531377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:33.984246 containerd[1535]: time="2025-05-13T00:28:33.984108656Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.324824973s"
May 13 00:28:33.984246 containerd[1535]: time="2025-05-13T00:28:33.984139149Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 00:28:34.002033 containerd[1535]: time="2025-05-13T00:28:34.001999282Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 13 00:28:34.959101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2609831220.mount: Deactivated successfully.
May 13 00:28:35.516898 containerd[1535]: time="2025-05-13T00:28:35.516853284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.517765 containerd[1535]: time="2025-05-13T00:28:35.517332764Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 13 00:28:35.518696 containerd[1535]: time="2025-05-13T00:28:35.518618286Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.521703 containerd[1535]: time="2025-05-13T00:28:35.521652812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.523273 containerd[1535]: time="2025-05-13T00:28:35.523229253Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.521191752s"
May 13 00:28:35.523273 containerd[1535]: time="2025-05-13T00:28:35.523269467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 13 00:28:35.540864 containerd[1535]: time="2025-05-13T00:28:35.540823380Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 13 00:28:35.974079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756254085.mount: Deactivated successfully.
May 13 00:28:35.977872 containerd[1535]: time="2025-05-13T00:28:35.977826974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.979005 containerd[1535]: time="2025-05-13T00:28:35.978973371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 13 00:28:35.979748 containerd[1535]: time="2025-05-13T00:28:35.979716947Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.982289 containerd[1535]: time="2025-05-13T00:28:35.982253471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:35.983407 containerd[1535]: time="2025-05-13T00:28:35.983361859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 442.501032ms"
May 13 00:28:35.983450 containerd[1535]: time="2025-05-13T00:28:35.983406884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 13 00:28:36.003628 containerd[1535]: time="2025-05-13T00:28:36.003590313Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 13 00:28:36.517277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967236681.mount: Deactivated successfully.
May 13 00:28:38.216533 containerd[1535]: time="2025-05-13T00:28:38.216458075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:38.217385 containerd[1535]: time="2025-05-13T00:28:38.217336610Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 13 00:28:38.219961 containerd[1535]: time="2025-05-13T00:28:38.219882316Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:38.225067 containerd[1535]: time="2025-05-13T00:28:38.225028733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:28:38.227626 containerd[1535]: time="2025-05-13T00:28:38.227471758Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.223841008s"
May 13 00:28:38.227626 containerd[1535]: time="2025-05-13T00:28:38.227532894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 13 00:28:42.658723 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 13 00:28:42.668463 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:28:42.831565 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:42.836627 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:28:42.873142 kubelet[2232]: E0513 00:28:42.873079 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:28:42.875338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:28:42.875478 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:28:44.018184 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:44.037519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:28:44.050376 systemd[1]: Reloading requested from client PID 2250 ('systemctl') (unit session-7.scope)...
May 13 00:28:44.050392 systemd[1]: Reloading...
May 13 00:28:44.105330 zram_generator::config[2292]: No configuration found.
May 13 00:28:44.218026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:28:44.270547 systemd[1]: Reloading finished in 219 ms.
May 13 00:28:44.300411 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 00:28:44.300530 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 00:28:44.300831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:44.313609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:28:44.401982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:28:44.406595 (kubelet)[2347]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:28:44.450222 kubelet[2347]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:28:44.450222 kubelet[2347]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:28:44.450222 kubelet[2347]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:28:44.451021 kubelet[2347]: I0513 00:28:44.450970 2347 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 13 00:28:45.721013 kubelet[2347]: I0513 00:28:45.720970 2347 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 13 00:28:45.721013 kubelet[2347]: I0513 00:28:45.721003 2347 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 13 00:28:45.721400 kubelet[2347]: I0513 00:28:45.721220 2347 server.go:927] "Client rotation is on, will bootstrap in background"
May 13 00:28:45.747370 kubelet[2347]: E0513 00:28:45.747342 2347 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.747370 kubelet[2347]: I0513 00:28:45.747352 2347 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:28:45.759570 kubelet[2347]: I0513 00:28:45.759477 2347 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:28:45.761813 kubelet[2347]: I0513 00:28:45.761740 2347 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:28:45.761991 kubelet[2347]: I0513 00:28:45.761795 2347 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:28:45.762103 kubelet[2347]: I0513 00:28:45.762035 2347 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:28:45.762103 kubelet[2347]: I0513 00:28:45.762046 2347 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:28:45.762268 kubelet[2347]: I0513 00:28:45.762242 2347 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:28:45.765171 kubelet[2347]: I0513 00:28:45.765131 2347 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:28:45.765171 kubelet[2347]: I0513 00:28:45.765161 2347 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:28:45.765381 kubelet[2347]: I0513 00:28:45.765357 2347 kubelet.go:312] "Adding apiserver pod source"
May 13 00:28:45.765985 kubelet[2347]: I0513 00:28:45.765477 2347 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:28:45.765985 kubelet[2347]: W0513 00:28:45.765757 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.765985 kubelet[2347]: E0513 00:28:45.765812 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.765985 kubelet[2347]: W0513 00:28:45.765878 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.765985 kubelet[2347]: E0513 00:28:45.765920 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.766538 kubelet[2347]: I0513 00:28:45.766521 2347 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 00:28:45.766866 kubelet[2347]: I0513 00:28:45.766852 2347 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:28:45.766964 kubelet[2347]: W0513 00:28:45.766954 2347 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 13 00:28:45.767737 kubelet[2347]: I0513 00:28:45.767715 2347 server.go:1264] "Started kubelet"
May 13 00:28:45.770790 kubelet[2347]: I0513 00:28:45.770232 2347 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:28:45.770917 kubelet[2347]: E0513 00:28:45.770439 2347 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183eeeab7cd82c00 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-13 00:28:45.767691264 +0000 UTC m=+1.358109295,LastTimestamp:2025-05-13 00:28:45.767691264 +0000 UTC m=+1.358109295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 13 00:28:45.771796 kubelet[2347]: I0513 00:28:45.771735 2347 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:28:45.772042 kubelet[2347]: I0513 00:28:45.771974 2347 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:28:45.772103 kubelet[2347]: I0513 00:28:45.772075 2347 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:28:45.772955 kubelet[2347]: E0513 00:28:45.772937 2347 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:28:45.773046 kubelet[2347]: I0513 00:28:45.772942 2347 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:28:45.773371 kubelet[2347]: W0513 00:28:45.773326 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.773423 kubelet[2347]: E0513 00:28:45.773376 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.773423 kubelet[2347]: I0513 00:28:45.773024 2347 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:28:45.773605 kubelet[2347]: I0513 00:28:45.772984 2347 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:28:45.773896 kubelet[2347]: E0513 00:28:45.773607 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="200ms"
May 13 00:28:45.773896 kubelet[2347]: I0513 00:28:45.773779 2347 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:28:45.774765 kubelet[2347]: I0513 00:28:45.774741 2347 factory.go:221] Registration of the systemd container factory successfully
May 13 00:28:45.774852 kubelet[2347]: I0513 00:28:45.774831 2347 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:28:45.776474 kubelet[2347]: I0513 00:28:45.776454 2347 factory.go:221] Registration of the containerd container factory successfully
May 13 00:28:45.788686 kubelet[2347]: I0513 00:28:45.787971 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:28:45.789357 kubelet[2347]: I0513 00:28:45.789332 2347 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:28:45.789508 kubelet[2347]: I0513 00:28:45.789494 2347 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:28:45.789541 kubelet[2347]: I0513 00:28:45.789516 2347 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:28:45.789574 kubelet[2347]: E0513 00:28:45.789563 2347 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:28:45.791352 kubelet[2347]: W0513 00:28:45.791282 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.791428 kubelet[2347]: E0513 00:28:45.791357 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused
May 13 00:28:45.798786 kubelet[2347]: I0513 00:28:45.798755 2347 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:28:45.798786 kubelet[2347]: I0513 00:28:45.798773 2347 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:28:45.798786 kubelet[2347]: I0513 00:28:45.798791 2347 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:28:45.864908 kubelet[2347]: I0513 00:28:45.864871 2347 policy_none.go:49] "None policy: Start"
May 13 00:28:45.866048 kubelet[2347]: I0513 00:28:45.865867 2347 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:28:45.866048 kubelet[2347]: I0513 00:28:45.865895 2347 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:28:45.871868 kubelet[2347]: I0513 00:28:45.871099 2347 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:28:45.871868 kubelet[2347]: I0513 00:28:45.871289 2347 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:28:45.871868 kubelet[2347]: I0513 00:28:45.871398 2347 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:28:45.872879 kubelet[2347]: E0513 00:28:45.872855 2347 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 13 00:28:45.873501 kubelet[2347]: I0513 00:28:45.873472 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:28:45.873947 kubelet[2347]: E0513 00:28:45.873921 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost"
May 13 00:28:45.890271 kubelet[2347]: I0513 00:28:45.890201 2347 topology_manager.go:215] "Topology Admit Handler" podUID="87055c1873472f16a64cf579cafd6eaa" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:28:45.891311 kubelet[2347]: I0513 00:28:45.891270 2347 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system"
podName="kube-controller-manager-localhost" May 13 00:28:45.893253 kubelet[2347]: I0513 00:28:45.893221 2347 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 13 00:28:45.973770 kubelet[2347]: I0513 00:28:45.973655 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:45.973770 kubelet[2347]: I0513 00:28:45.973697 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 13 00:28:45.973770 kubelet[2347]: I0513 00:28:45.973719 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:45.973770 kubelet[2347]: I0513 00:28:45.973736 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:45.973770 kubelet[2347]: I0513 00:28:45.973763 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:45.973951 kubelet[2347]: I0513 00:28:45.973781 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:45.973951 kubelet[2347]: I0513 00:28:45.973802 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:45.973951 kubelet[2347]: I0513 00:28:45.973817 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost" May 13 00:28:45.973951 kubelet[2347]: I0513 00:28:45.973840 2347 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 13 00:28:45.974738 kubelet[2347]: E0513 00:28:45.974528 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="400ms" May 13 00:28:46.076344 kubelet[2347]: I0513 00:28:46.075999 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:46.076444 kubelet[2347]: E0513 00:28:46.076347 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 13 00:28:46.195440 kubelet[2347]: E0513 00:28:46.195403 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.196081 containerd[1535]: time="2025-05-13T00:28:46.195995026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87055c1873472f16a64cf579cafd6eaa,Namespace:kube-system,Attempt:0,}" May 13 00:28:46.197195 kubelet[2347]: E0513 00:28:46.197176 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.197726 containerd[1535]: time="2025-05-13T00:28:46.197481907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 13 00:28:46.199121 kubelet[2347]: E0513 00:28:46.199093 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.199426 containerd[1535]: time="2025-05-13T00:28:46.199400180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 13 00:28:46.375470 kubelet[2347]: E0513 00:28:46.375349 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="800ms" May 13 00:28:46.477712 kubelet[2347]: I0513 00:28:46.477682 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:46.478055 kubelet[2347]: E0513 00:28:46.478017 2347 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.98:6443/api/v1/nodes\": dial tcp 10.0.0.98:6443: connect: connection refused" node="localhost" May 13 00:28:46.652062 kubelet[2347]: W0513 00:28:46.651948 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.652062 kubelet[2347]: E0513 00:28:46.651992 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.680059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035268857.mount: Deactivated successfully. 
May 13 00:28:46.683931 containerd[1535]: time="2025-05-13T00:28:46.683889608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:46.684997 containerd[1535]: time="2025-05-13T00:28:46.684973152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:28:46.686658 containerd[1535]: time="2025-05-13T00:28:46.686614436Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:46.688167 containerd[1535]: time="2025-05-13T00:28:46.688111082Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:46.688728 containerd[1535]: time="2025-05-13T00:28:46.688589619Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:28:46.689334 containerd[1535]: time="2025-05-13T00:28:46.689288156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:46.690179 containerd[1535]: time="2025-05-13T00:28:46.690126927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:28:46.691204 containerd[1535]: time="2025-05-13T00:28:46.691158243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:28:46.694174 
containerd[1535]: time="2025-05-13T00:28:46.694143770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.605793ms" May 13 00:28:46.696118 containerd[1535]: time="2025-05-13T00:28:46.695915885Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.845618ms" May 13 00:28:46.696478 containerd[1535]: time="2025-05-13T00:28:46.696453534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.002046ms" May 13 00:28:46.786321 kubelet[2347]: W0513 00:28:46.786252 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.786642 kubelet[2347]: E0513 00:28:46.786500 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.834800 containerd[1535]: time="2025-05-13T00:28:46.834645634Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:46.834800 containerd[1535]: time="2025-05-13T00:28:46.834716032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:46.834800 containerd[1535]: time="2025-05-13T00:28:46.834731600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.834964 containerd[1535]: time="2025-05-13T00:28:46.834891046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.838454 containerd[1535]: time="2025-05-13T00:28:46.837776920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:46.838576 containerd[1535]: time="2025-05-13T00:28:46.838478658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:46.838576 containerd[1535]: time="2025-05-13T00:28:46.838499029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.838650 containerd[1535]: time="2025-05-13T00:28:46.838589918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.839332 containerd[1535]: time="2025-05-13T00:28:46.839217816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:28:46.839474 containerd[1535]: time="2025-05-13T00:28:46.839291656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:28:46.839474 containerd[1535]: time="2025-05-13T00:28:46.839330677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.839836 containerd[1535]: time="2025-05-13T00:28:46.839644286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:28:46.883267 containerd[1535]: time="2025-05-13T00:28:46.883214669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c74d2e43c57a01c51305498dd18119b8109388e48c2f5c47e0fe1e75a87bcdc\"" May 13 00:28:46.889930 containerd[1535]: time="2025-05-13T00:28:46.889717771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9665e179aa6d2c2bfcf957df8a197ebba9026ac7efa56b010ad5bd046bebdff\"" May 13 00:28:46.890267 kubelet[2347]: E0513 00:28:46.890140 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.890667 kubelet[2347]: E0513 00:28:46.890445 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.890745 containerd[1535]: time="2025-05-13T00:28:46.890728596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:87055c1873472f16a64cf579cafd6eaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fd9cb405232bc232409604b3aee5f4e8c297c2be3c35e60359775160474908f\"" May 13 00:28:46.891233 kubelet[2347]: E0513 00:28:46.891115 2347 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:46.892759 containerd[1535]: time="2025-05-13T00:28:46.892648950Z" level=info msg="CreateContainer within sandbox \"5c74d2e43c57a01c51305498dd18119b8109388e48c2f5c47e0fe1e75a87bcdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 13 00:28:46.893761 containerd[1535]: time="2025-05-13T00:28:46.893737096Z" level=info msg="CreateContainer within sandbox \"e9665e179aa6d2c2bfcf957df8a197ebba9026ac7efa56b010ad5bd046bebdff\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 13 00:28:46.894141 containerd[1535]: time="2025-05-13T00:28:46.894116300Z" level=info msg="CreateContainer within sandbox \"9fd9cb405232bc232409604b3aee5f4e8c297c2be3c35e60359775160474908f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 13 00:28:46.907764 containerd[1535]: time="2025-05-13T00:28:46.907670039Z" level=info msg="CreateContainer within sandbox \"5c74d2e43c57a01c51305498dd18119b8109388e48c2f5c47e0fe1e75a87bcdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f17fa112068b6ea5d6379d9b71263056d6bea8d4339c2d6af5200972963115d\"" May 13 00:28:46.908218 containerd[1535]: time="2025-05-13T00:28:46.908191800Z" level=info msg="StartContainer for \"6f17fa112068b6ea5d6379d9b71263056d6bea8d4339c2d6af5200972963115d\"" May 13 00:28:46.912127 containerd[1535]: time="2025-05-13T00:28:46.912032388Z" level=info msg="CreateContainer within sandbox \"e9665e179aa6d2c2bfcf957df8a197ebba9026ac7efa56b010ad5bd046bebdff\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"180348b374cd0b6787dcdf61a4e2ddacc6de4e38f7249239ca27fc6b8c17d73c\"" May 13 00:28:46.913099 containerd[1535]: time="2025-05-13T00:28:46.912497839Z" level=info msg="StartContainer for 
\"180348b374cd0b6787dcdf61a4e2ddacc6de4e38f7249239ca27fc6b8c17d73c\"" May 13 00:28:46.915745 containerd[1535]: time="2025-05-13T00:28:46.915703765Z" level=info msg="CreateContainer within sandbox \"9fd9cb405232bc232409604b3aee5f4e8c297c2be3c35e60359775160474908f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ea3731c76c9855941c1e9916bfc37bc08a9f981e5694b7e6280d5fe96ae610b3\"" May 13 00:28:46.916341 containerd[1535]: time="2025-05-13T00:28:46.916175580Z" level=info msg="StartContainer for \"ea3731c76c9855941c1e9916bfc37bc08a9f981e5694b7e6280d5fe96ae610b3\"" May 13 00:28:46.961218 kubelet[2347]: W0513 00:28:46.959597 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.961218 kubelet[2347]: E0513 00:28:46.959659 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:46.969343 containerd[1535]: time="2025-05-13T00:28:46.969184166Z" level=info msg="StartContainer for \"6f17fa112068b6ea5d6379d9b71263056d6bea8d4339c2d6af5200972963115d\" returns successfully" May 13 00:28:46.969343 containerd[1535]: time="2025-05-13T00:28:46.969287141Z" level=info msg="StartContainer for \"180348b374cd0b6787dcdf61a4e2ddacc6de4e38f7249239ca27fc6b8c17d73c\" returns successfully" May 13 00:28:46.995162 containerd[1535]: time="2025-05-13T00:28:46.991321847Z" level=info msg="StartContainer for \"ea3731c76c9855941c1e9916bfc37bc08a9f981e5694b7e6280d5fe96ae610b3\" returns successfully" May 13 00:28:47.095513 kubelet[2347]: W0513 00:28:47.095424 2347 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: 
Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:47.095513 kubelet[2347]: E0513 00:28:47.095487 2347 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.98:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.98:6443: connect: connection refused May 13 00:28:47.176290 kubelet[2347]: E0513 00:28:47.176186 2347 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.98:6443: connect: connection refused" interval="1.6s" May 13 00:28:47.281313 kubelet[2347]: I0513 00:28:47.279584 2347 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 13 00:28:47.798160 kubelet[2347]: E0513 00:28:47.798101 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:47.800630 kubelet[2347]: E0513 00:28:47.799496 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:47.801876 kubelet[2347]: E0513 00:28:47.801843 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:48.695319 kubelet[2347]: I0513 00:28:48.695280 2347 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 13 00:28:48.767696 kubelet[2347]: I0513 00:28:48.767657 2347 apiserver.go:52] "Watching apiserver" May 13 00:28:48.773089 kubelet[2347]: I0513 00:28:48.773020 2347 desired_state_of_world_populator.go:157] "Finished populating initial desired state 
of world" May 13 00:28:48.812309 kubelet[2347]: E0513 00:28:48.812276 2347 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 13 00:28:48.812715 kubelet[2347]: E0513 00:28:48.812672 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:48.812771 kubelet[2347]: E0513 00:28:48.812753 2347 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 13 00:28:48.813101 kubelet[2347]: E0513 00:28:48.813085 2347 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:28:50.862567 systemd[1]: Reloading requested from client PID 2627 ('systemctl') (unit session-7.scope)... May 13 00:28:50.862583 systemd[1]: Reloading... May 13 00:28:50.919576 zram_generator::config[2666]: No configuration found. May 13 00:28:51.068863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:28:51.128750 systemd[1]: Reloading finished in 265 ms. May 13 00:28:51.158238 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:51.158474 kubelet[2347]: I0513 00:28:51.158260 2347 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:28:51.176334 systemd[1]: kubelet.service: Deactivated successfully. 
May 13 00:28:51.176693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:51.187521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:28:51.276024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:28:51.279683 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:28:51.332181 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:51.332181 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:28:51.332181 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:28:51.332572 kubelet[2718]: I0513 00:28:51.332229 2718 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:28:51.336352 kubelet[2718]: I0513 00:28:51.336282 2718 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:28:51.336352 kubelet[2718]: I0513 00:28:51.336315 2718 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:28:51.336506 kubelet[2718]: I0513 00:28:51.336489 2718 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:28:51.337804 kubelet[2718]: I0513 00:28:51.337781 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 13 00:28:51.338974 kubelet[2718]: I0513 00:28:51.338943 2718 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 13 00:28:51.343578 kubelet[2718]: I0513 00:28:51.343555 2718 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 13 00:28:51.343945 kubelet[2718]: I0513 00:28:51.343909 2718 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 13 00:28:51.344084 kubelet[2718]: I0513 00:28:51.343937 2718 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 13 00:28:51.344084 kubelet[2718]: I0513 00:28:51.344084 2718 topology_manager.go:138] "Creating topology manager with none policy"
May 13 00:28:51.344173 kubelet[2718]: I0513 00:28:51.344092 2718 container_manager_linux.go:301] "Creating device plugin manager"
May 13 00:28:51.344173 kubelet[2718]: I0513 00:28:51.344122 2718 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:28:51.344222 kubelet[2718]: I0513 00:28:51.344214 2718 kubelet.go:400] "Attempting to sync node with API server"
May 13 00:28:51.344247 kubelet[2718]: I0513 00:28:51.344225 2718 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 13 00:28:51.344270 kubelet[2718]: I0513 00:28:51.344248 2718 kubelet.go:312] "Adding apiserver pod source"
May 13 00:28:51.344270 kubelet[2718]: I0513 00:28:51.344261 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 13 00:28:51.345681 kubelet[2718]: I0513 00:28:51.344718 2718 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 13 00:28:51.345681 kubelet[2718]: I0513 00:28:51.344873 2718 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 13 00:28:51.345681 kubelet[2718]: I0513 00:28:51.345247 2718 server.go:1264] "Started kubelet"
May 13 00:28:51.345681 kubelet[2718]: I0513 00:28:51.345504 2718 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 13 00:28:51.345681 kubelet[2718]: I0513 00:28:51.345550 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 13 00:28:51.346522 kubelet[2718]: I0513 00:28:51.346494 2718 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 13 00:28:51.351373 kubelet[2718]: I0513 00:28:51.347886 2718 server.go:455] "Adding debug handlers to kubelet server"
May 13 00:28:51.354936 kubelet[2718]: I0513 00:28:51.354911 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 13 00:28:51.358250 kubelet[2718]: E0513 00:28:51.357019 2718 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 13 00:28:51.358250 kubelet[2718]: I0513 00:28:51.357074 2718 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 13 00:28:51.358250 kubelet[2718]: I0513 00:28:51.357557 2718 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 13 00:28:51.358250 kubelet[2718]: I0513 00:28:51.357747 2718 reconciler.go:26] "Reconciler: start to sync state"
May 13 00:28:51.358872 kubelet[2718]: I0513 00:28:51.358843 2718 factory.go:221] Registration of the systemd container factory successfully
May 13 00:28:51.359352 kubelet[2718]: I0513 00:28:51.358951 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 13 00:28:51.359753 kubelet[2718]: E0513 00:28:51.359332 2718 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 13 00:28:51.362920 kubelet[2718]: I0513 00:28:51.362898 2718 factory.go:221] Registration of the containerd container factory successfully
May 13 00:28:51.370592 kubelet[2718]: I0513 00:28:51.370469 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 13 00:28:51.371427 kubelet[2718]: I0513 00:28:51.371393 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 13 00:28:51.371476 kubelet[2718]: I0513 00:28:51.371441 2718 status_manager.go:217] "Starting to sync pod status with apiserver"
May 13 00:28:51.371476 kubelet[2718]: I0513 00:28:51.371458 2718 kubelet.go:2337] "Starting kubelet main sync loop"
May 13 00:28:51.371533 kubelet[2718]: E0513 00:28:51.371497 2718 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 13 00:28:51.403878 kubelet[2718]: I0513 00:28:51.403791 2718 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 13 00:28:51.403878 kubelet[2718]: I0513 00:28:51.403808 2718 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 13 00:28:51.403878 kubelet[2718]: I0513 00:28:51.403827 2718 state_mem.go:36] "Initialized new in-memory state store"
May 13 00:28:51.404014 kubelet[2718]: I0513 00:28:51.403963 2718 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 13 00:28:51.404014 kubelet[2718]: I0513 00:28:51.403981 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 13 00:28:51.404014 kubelet[2718]: I0513 00:28:51.403999 2718 policy_none.go:49] "None policy: Start"
May 13 00:28:51.406056 kubelet[2718]: I0513 00:28:51.406022 2718 memory_manager.go:170] "Starting memorymanager" policy="None"
May 13 00:28:51.406056 kubelet[2718]: I0513 00:28:51.406057 2718 state_mem.go:35] "Initializing new in-memory state store"
May 13 00:28:51.406213 kubelet[2718]: I0513 00:28:51.406191 2718 state_mem.go:75] "Updated machine memory state"
May 13 00:28:51.407277 kubelet[2718]: I0513 00:28:51.407252 2718 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 13 00:28:51.407937 kubelet[2718]: I0513 00:28:51.407488 2718 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 13 00:28:51.407937 kubelet[2718]: I0513 00:28:51.407586 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 13 00:28:51.462646 kubelet[2718]: I0513 00:28:51.462620 2718 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 13 00:28:51.469010 kubelet[2718]: I0513 00:28:51.468974 2718 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
May 13 00:28:51.469103 kubelet[2718]: I0513 00:28:51.469058 2718 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 13 00:28:51.472090 kubelet[2718]: I0513 00:28:51.472018 2718 topology_manager.go:215] "Topology Admit Handler" podUID="87055c1873472f16a64cf579cafd6eaa" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 13 00:28:51.472283 kubelet[2718]: I0513 00:28:51.472266 2718 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 13 00:28:51.472426 kubelet[2718]: I0513 00:28:51.472411 2718 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 13 00:28:51.659359 kubelet[2718]: I0513 00:28:51.659282 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:28:51.659359 kubelet[2718]: I0513 00:28:51.659333 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:51.659359 kubelet[2718]: I0513 00:28:51.659356 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:51.659538 kubelet[2718]: I0513 00:28:51.659400 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:51.659538 kubelet[2718]: I0513 00:28:51.659443 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:51.659538 kubelet[2718]: I0513 00:28:51.659472 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:51.659538 kubelet[2718]: I0513 00:28:51.659499 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:28:51.659538 kubelet[2718]: I0513 00:28:51.659524 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87055c1873472f16a64cf579cafd6eaa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"87055c1873472f16a64cf579cafd6eaa\") " pod="kube-system/kube-apiserver-localhost"
May 13 00:28:51.659648 kubelet[2718]: I0513 00:28:51.659545 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 13 00:28:51.780420 kubelet[2718]: E0513 00:28:51.780299 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:51.780420 kubelet[2718]: E0513 00:28:51.780364 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:51.780420 kubelet[2718]: E0513 00:28:51.780376 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:51.864709 sudo[2751]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 13 00:28:51.864993 sudo[2751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 13 00:28:52.293019 sudo[2751]: pam_unix(sudo:session): session closed for user root
May 13 00:28:52.344546 kubelet[2718]: I0513 00:28:52.344511 2718 apiserver.go:52] "Watching apiserver"
May 13 00:28:52.358417 kubelet[2718]: I0513 00:28:52.358375 2718 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 13 00:28:52.393053 kubelet[2718]: E0513 00:28:52.388772 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:52.394933 kubelet[2718]: E0513 00:28:52.394394 2718 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 13 00:28:52.395540 kubelet[2718]: E0513 00:28:52.395491 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:52.397051 kubelet[2718]: E0513 00:28:52.397032 2718 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 13 00:28:52.397776 kubelet[2718]: E0513 00:28:52.397752 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:52.434272 kubelet[2718]: I0513 00:28:52.434143 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.434129307 podStartE2EDuration="1.434129307s" podCreationTimestamp="2025-05-13 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:52.432178151 +0000 UTC m=+1.149310966" watchObservedRunningTime="2025-05-13 00:28:52.434129307 +0000 UTC m=+1.151262122"
May 13 00:28:52.441501 kubelet[2718]: I0513 00:28:52.441427 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4414105099999999 podStartE2EDuration="1.44141051s" podCreationTimestamp="2025-05-13 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:52.44141015 +0000 UTC m=+1.158542925" watchObservedRunningTime="2025-05-13 00:28:52.44141051 +0000 UTC m=+1.158543285"
May 13 00:28:52.459507 kubelet[2718]: I0513 00:28:52.459434 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.459417666 podStartE2EDuration="1.459417666s" podCreationTimestamp="2025-05-13 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:28:52.449146953 +0000 UTC m=+1.166279768" watchObservedRunningTime="2025-05-13 00:28:52.459417666 +0000 UTC m=+1.176550481"
May 13 00:28:53.387721 kubelet[2718]: E0513 00:28:53.387684 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:53.388066 kubelet[2718]: E0513 00:28:53.387937 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:53.388066 kubelet[2718]: E0513 00:28:53.387962 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:54.039959 sudo[1747]: pam_unix(sudo:session): session closed for user root
May 13 00:28:54.042558 sshd[1740]: pam_unix(sshd:session): session closed for user core
May 13 00:28:54.046059 systemd[1]: sshd@6-10.0.0.98:22-10.0.0.1:51904.service: Deactivated successfully.
May 13 00:28:54.048056 systemd[1]: session-7.scope: Deactivated successfully.
May 13 00:28:54.048107 systemd-logind[1515]: Session 7 logged out. Waiting for processes to exit.
May 13 00:28:54.050225 systemd-logind[1515]: Removed session 7.
May 13 00:28:54.389295 kubelet[2718]: E0513 00:28:54.389194 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:55.391860 kubelet[2718]: E0513 00:28:55.391826 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:28:55.780664 kubelet[2718]: E0513 00:28:55.780634 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:03.208944 kubelet[2718]: E0513 00:29:03.207629 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:04.569339 kubelet[2718]: I0513 00:29:04.569210 2718 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 13 00:29:04.569714 containerd[1535]: time="2025-05-13T00:29:04.569504406Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 13 00:29:04.570612 kubelet[2718]: I0513 00:29:04.570060 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 13 00:29:04.869515 kubelet[2718]: E0513 00:29:04.869409 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:05.128827 update_engine[1518]: I20250513 00:29:05.128684 1518 update_attempter.cc:509] Updating boot flags...
May 13 00:29:05.154241 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2802)
May 13 00:29:05.188245 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2800)
May 13 00:29:05.205345 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2800)
May 13 00:29:05.405083 kubelet[2718]: E0513 00:29:05.404977 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:05.698486 kubelet[2718]: I0513 00:29:05.697187 2718 topology_manager.go:215] "Topology Admit Handler" podUID="14ebc8c6-f9c6-4fa1-9f28-915be5214360" podNamespace="kube-system" podName="cilium-operator-599987898-6z44t"
May 13 00:29:05.751435 kubelet[2718]: I0513 00:29:05.750365 2718 topology_manager.go:215] "Topology Admit Handler" podUID="3ceb543b-c002-48b1-bea7-d288af6e679b" podNamespace="kube-system" podName="kube-proxy-l9m5f"
May 13 00:29:05.753956 kubelet[2718]: I0513 00:29:05.753915 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14ebc8c6-f9c6-4fa1-9f28-915be5214360-cilium-config-path\") pod \"cilium-operator-599987898-6z44t\" (UID: \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\") " pod="kube-system/cilium-operator-599987898-6z44t"
May 13 00:29:05.753956 kubelet[2718]: I0513 00:29:05.753951 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqq9\" (UniqueName: \"kubernetes.io/projected/14ebc8c6-f9c6-4fa1-9f28-915be5214360-kube-api-access-9zqq9\") pod \"cilium-operator-599987898-6z44t\" (UID: \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\") " pod="kube-system/cilium-operator-599987898-6z44t"
May 13 00:29:05.757932 kubelet[2718]: I0513 00:29:05.757895 2718 topology_manager.go:215] "Topology Admit Handler" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" podNamespace="kube-system" podName="cilium-sq9rn"
May 13 00:29:05.792456 kubelet[2718]: E0513 00:29:05.792430 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:05.854963 kubelet[2718]: I0513 00:29:05.854931 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ceb543b-c002-48b1-bea7-d288af6e679b-kube-proxy\") pod \"kube-proxy-l9m5f\" (UID: \"3ceb543b-c002-48b1-bea7-d288af6e679b\") " pod="kube-system/kube-proxy-l9m5f"
May 13 00:29:05.855181 kubelet[2718]: I0513 00:29:05.855163 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hostproc\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855275 kubelet[2718]: I0513 00:29:05.855261 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-lib-modules\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855396 kubelet[2718]: I0513 00:29:05.855383 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-clustermesh-secrets\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855478 kubelet[2718]: I0513 00:29:05.855466 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-config-path\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855602 kubelet[2718]: I0513 00:29:05.855558 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpl4\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-kube-api-access-6hpl4\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855638 kubelet[2718]: I0513 00:29:05.855612 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ceb543b-c002-48b1-bea7-d288af6e679b-xtables-lock\") pod \"kube-proxy-l9m5f\" (UID: \"3ceb543b-c002-48b1-bea7-d288af6e679b\") " pod="kube-system/kube-proxy-l9m5f"
May 13 00:29:05.855665 kubelet[2718]: I0513 00:29:05.855641 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gzh\" (UniqueName: \"kubernetes.io/projected/3ceb543b-c002-48b1-bea7-d288af6e679b-kube-api-access-g6gzh\") pod \"kube-proxy-l9m5f\" (UID: \"3ceb543b-c002-48b1-bea7-d288af6e679b\") " pod="kube-system/kube-proxy-l9m5f"
May 13 00:29:05.855698 kubelet[2718]: I0513 00:29:05.855662 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-etc-cni-netd\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855698 kubelet[2718]: I0513 00:29:05.855693 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-xtables-lock\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855746 kubelet[2718]: I0513 00:29:05.855710 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-net\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855746 kubelet[2718]: I0513 00:29:05.855730 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hubble-tls\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855792 kubelet[2718]: I0513 00:29:05.855756 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ceb543b-c002-48b1-bea7-d288af6e679b-lib-modules\") pod \"kube-proxy-l9m5f\" (UID: \"3ceb543b-c002-48b1-bea7-d288af6e679b\") " pod="kube-system/kube-proxy-l9m5f"
May 13 00:29:05.855792 kubelet[2718]: I0513 00:29:05.855776 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-run\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855835 kubelet[2718]: I0513 00:29:05.855791 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-cgroup\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855835 kubelet[2718]: I0513 00:29:05.855820 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cni-path\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855918 kubelet[2718]: I0513 00:29:05.855902 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-bpf-maps\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:05.855944 kubelet[2718]: I0513 00:29:05.855925 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-kernel\") pod \"cilium-sq9rn\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " pod="kube-system/cilium-sq9rn"
May 13 00:29:06.001055 kubelet[2718]: E0513 00:29:06.000920 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.002322 containerd[1535]: time="2025-05-13T00:29:06.001994138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6z44t,Uid:14ebc8c6-f9c6-4fa1-9f28-915be5214360,Namespace:kube-system,Attempt:0,}"
May 13 00:29:06.023294 containerd[1535]: time="2025-05-13T00:29:06.023195052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:29:06.023294 containerd[1535]: time="2025-05-13T00:29:06.023254858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:29:06.023294 containerd[1535]: time="2025-05-13T00:29:06.023273060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.023536 containerd[1535]: time="2025-05-13T00:29:06.023382151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.054510 kubelet[2718]: E0513 00:29:06.054469 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.057778 containerd[1535]: time="2025-05-13T00:29:06.057435468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l9m5f,Uid:3ceb543b-c002-48b1-bea7-d288af6e679b,Namespace:kube-system,Attempt:0,}"
May 13 00:29:06.068603 kubelet[2718]: E0513 00:29:06.068571 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.070175 containerd[1535]: time="2025-05-13T00:29:06.069529114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sq9rn,Uid:c4bc0836-86e1-4a9c-b262-c783d9fbb9c7,Namespace:kube-system,Attempt:0,}"
May 13 00:29:06.071399 containerd[1535]: time="2025-05-13T00:29:06.071364897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-6z44t,Uid:14ebc8c6-f9c6-4fa1-9f28-915be5214360,Namespace:kube-system,Attempt:0,} returns sandbox id \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\""
May 13 00:29:06.077332 kubelet[2718]: E0513 00:29:06.077286 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.083009 containerd[1535]: time="2025-05-13T00:29:06.082073645Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 13 00:29:06.094399 containerd[1535]: time="2025-05-13T00:29:06.093901745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:29:06.094399 containerd[1535]: time="2025-05-13T00:29:06.093981633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:29:06.094399 containerd[1535]: time="2025-05-13T00:29:06.093996834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.094869 containerd[1535]: time="2025-05-13T00:29:06.094689703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.097926 containerd[1535]: time="2025-05-13T00:29:06.097862460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:29:06.097926 containerd[1535]: time="2025-05-13T00:29:06.097904904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:29:06.098326 containerd[1535]: time="2025-05-13T00:29:06.097916545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.098326 containerd[1535]: time="2025-05-13T00:29:06.097991873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:29:06.133358 containerd[1535]: time="2025-05-13T00:29:06.133287193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l9m5f,Uid:3ceb543b-c002-48b1-bea7-d288af6e679b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9709061d54609adf7319d55585546b6e09266640daf07b70748030623f918ad7\""
May 13 00:29:06.133651 containerd[1535]: time="2025-05-13T00:29:06.133332517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sq9rn,Uid:c4bc0836-86e1-4a9c-b262-c783d9fbb9c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\""
May 13 00:29:06.134920 kubelet[2718]: E0513 00:29:06.134889 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.135340 kubelet[2718]: E0513 00:29:06.134890 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.142547 containerd[1535]: time="2025-05-13T00:29:06.142497272Z" level=info msg="CreateContainer within sandbox \"9709061d54609adf7319d55585546b6e09266640daf07b70748030623f918ad7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:29:06.170471 containerd[1535]: time="2025-05-13T00:29:06.170348329Z" level=info msg="CreateContainer within sandbox \"9709061d54609adf7319d55585546b6e09266640daf07b70748030623f918ad7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1939c9d464f0d77e07bc34eedc85d7273d428301567d202938cba67315378ffb\""
May 13 00:29:06.171180 containerd[1535]: time="2025-05-13T00:29:06.171146289Z" level=info msg="StartContainer for \"1939c9d464f0d77e07bc34eedc85d7273d428301567d202938cba67315378ffb\""
May 13 00:29:06.240847 containerd[1535]: time="2025-05-13T00:29:06.240791675Z" level=info msg="StartContainer for \"1939c9d464f0d77e07bc34eedc85d7273d428301567d202938cba67315378ffb\" returns successfully"
May 13 00:29:06.413987 kubelet[2718]: E0513 00:29:06.413948 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:06.432463 kubelet[2718]: I0513 00:29:06.432403 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l9m5f" podStartSLOduration=1.432388585 podStartE2EDuration="1.432388585s" podCreationTimestamp="2025-05-13 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:06.432099716 +0000 UTC m=+15.149232531" watchObservedRunningTime="2025-05-13 00:29:06.432388585 +0000 UTC m=+15.149521400"
May 13 00:29:07.262868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774394502.mount: Deactivated successfully.
May 13 00:29:07.982982 containerd[1535]: time="2025-05-13T00:29:07.982934126Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:29:07.985158 containerd[1535]: time="2025-05-13T00:29:07.985134134Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 13 00:29:07.986160 containerd[1535]: time="2025-05-13T00:29:07.986133509Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:29:07.987326 containerd[1535]: time="2025-05-13T00:29:07.987284578Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.905168089s"
May 13 00:29:07.987397 containerd[1535]: time="2025-05-13T00:29:07.987334623Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 13 00:29:07.993075 containerd[1535]: time="2025-05-13T00:29:07.992742216Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 13 00:29:08.000463 containerd[1535]: time="2025-05-13T00:29:08.000430986Z" level=info msg="CreateContainer within sandbox \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 13 00:29:08.014922 containerd[1535]: time="2025-05-13T00:29:08.014863334Z" level=info msg="CreateContainer within sandbox \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\""
May 13 00:29:08.016324 containerd[1535]: time="2025-05-13T00:29:08.015289613Z" level=info msg="StartContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\""
May 13 00:29:08.061402 containerd[1535]: time="2025-05-13T00:29:08.061361656Z" level=info msg="StartContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" returns successfully"
May 13 00:29:08.433345 kubelet[2718]: E0513 00:29:08.432742 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:08.477007 kubelet[2718]: I0513 00:29:08.476506 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-6z44t" podStartSLOduration=1.563580594 podStartE2EDuration="3.474710372s" podCreationTimestamp="2025-05-13 00:29:05 +0000 UTC" firstStartedPulling="2025-05-13 00:29:06.081431381 +0000 UTC m=+14.798564196" lastFinishedPulling="2025-05-13 00:29:07.992561159 +0000 UTC m=+16.709693974" observedRunningTime="2025-05-13 00:29:08.47469385 +0000 UTC m=+17.191826665" watchObservedRunningTime="2025-05-13 00:29:08.474710372 +0000 UTC m=+17.191843187"
May 13 00:29:09.434359 kubelet[2718]: E0513 00:29:09.433594 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:29:12.227786 systemd[1]:
var-lib-containerd-tmpmounts-containerd\x2dmount2562469645.mount: Deactivated successfully. May 13 00:29:13.466776 containerd[1535]: time="2025-05-13T00:29:13.466728277Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:13.467348 containerd[1535]: time="2025-05-13T00:29:13.467305519Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 13 00:29:13.468020 containerd[1535]: time="2025-05-13T00:29:13.467996808Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:29:13.469923 containerd[1535]: time="2025-05-13T00:29:13.469548159Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.4767737s" May 13 00:29:13.469923 containerd[1535]: time="2025-05-13T00:29:13.469583802Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 13 00:29:13.472229 containerd[1535]: time="2025-05-13T00:29:13.472109383Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 13 00:29:13.480610 containerd[1535]: time="2025-05-13T00:29:13.480572949Z" level=info 
msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\"" May 13 00:29:13.480851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52140112.mount: Deactivated successfully. May 13 00:29:13.482057 containerd[1535]: time="2025-05-13T00:29:13.481945327Z" level=info msg="StartContainer for \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\"" May 13 00:29:13.599629 containerd[1535]: time="2025-05-13T00:29:13.599582352Z" level=info msg="StartContainer for \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\" returns successfully" May 13 00:29:13.636705 containerd[1535]: time="2025-05-13T00:29:13.634382325Z" level=info msg="shim disconnected" id=1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5 namespace=k8s.io May 13 00:29:13.636705 containerd[1535]: time="2025-05-13T00:29:13.636697210Z" level=warning msg="cleaning up after shim disconnected" id=1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5 namespace=k8s.io May 13 00:29:13.636705 containerd[1535]: time="2025-05-13T00:29:13.636712772Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:14.443036 kubelet[2718]: E0513 00:29:14.443007 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:14.446760 containerd[1535]: time="2025-05-13T00:29:14.446275802Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 13 00:29:14.462898 containerd[1535]: time="2025-05-13T00:29:14.462858939Z" level=info msg="CreateContainer within sandbox 
\"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\"" May 13 00:29:14.463903 containerd[1535]: time="2025-05-13T00:29:14.463616071Z" level=info msg="StartContainer for \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\"" May 13 00:29:14.481186 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5-rootfs.mount: Deactivated successfully. May 13 00:29:14.510112 containerd[1535]: time="2025-05-13T00:29:14.510009451Z" level=info msg="StartContainer for \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\" returns successfully" May 13 00:29:14.538389 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:29:14.542250 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:29:14.542328 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 13 00:29:14.549601 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:29:14.560379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:29:14.564570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700-rootfs.mount: Deactivated successfully. 
May 13 00:29:14.570526 containerd[1535]: time="2025-05-13T00:29:14.570291303Z" level=info msg="shim disconnected" id=cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700 namespace=k8s.io May 13 00:29:14.570526 containerd[1535]: time="2025-05-13T00:29:14.570372188Z" level=warning msg="cleaning up after shim disconnected" id=cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700 namespace=k8s.io May 13 00:29:14.570526 containerd[1535]: time="2025-05-13T00:29:14.570385349Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:15.446325 kubelet[2718]: E0513 00:29:15.446117 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:15.449446 containerd[1535]: time="2025-05-13T00:29:15.449402396Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 13 00:29:15.479094 containerd[1535]: time="2025-05-13T00:29:15.479038142Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\"" May 13 00:29:15.479866 containerd[1535]: time="2025-05-13T00:29:15.479838034Z" level=info msg="StartContainer for \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\"" May 13 00:29:15.534518 containerd[1535]: time="2025-05-13T00:29:15.534473462Z" level=info msg="StartContainer for \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\" returns successfully" May 13 00:29:15.556347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d-rootfs.mount: Deactivated successfully. 
May 13 00:29:15.561044 containerd[1535]: time="2025-05-13T00:29:15.560987042Z" level=info msg="shim disconnected" id=da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d namespace=k8s.io May 13 00:29:15.561044 containerd[1535]: time="2025-05-13T00:29:15.561042926Z" level=warning msg="cleaning up after shim disconnected" id=da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d namespace=k8s.io May 13 00:29:15.561189 containerd[1535]: time="2025-05-13T00:29:15.561054007Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:16.449416 kubelet[2718]: E0513 00:29:16.449220 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:16.451460 containerd[1535]: time="2025-05-13T00:29:16.451391889Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 13 00:29:16.476344 containerd[1535]: time="2025-05-13T00:29:16.476252494Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\"" May 13 00:29:16.477338 containerd[1535]: time="2025-05-13T00:29:16.476833250Z" level=info msg="StartContainer for \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\"" May 13 00:29:16.522730 containerd[1535]: time="2025-05-13T00:29:16.522573410Z" level=info msg="StartContainer for \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\" returns successfully" May 13 00:29:16.536057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d-rootfs.mount: Deactivated successfully. 
May 13 00:29:16.541488 containerd[1535]: time="2025-05-13T00:29:16.541426317Z" level=info msg="shim disconnected" id=bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d namespace=k8s.io May 13 00:29:16.541488 containerd[1535]: time="2025-05-13T00:29:16.541492201Z" level=warning msg="cleaning up after shim disconnected" id=bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d namespace=k8s.io May 13 00:29:16.541849 containerd[1535]: time="2025-05-13T00:29:16.541503521Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:29:17.453199 kubelet[2718]: E0513 00:29:17.453166 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:17.457055 containerd[1535]: time="2025-05-13T00:29:17.457009243Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 13 00:29:17.473102 containerd[1535]: time="2025-05-13T00:29:17.472012510Z" level=info msg="CreateContainer within sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\"" May 13 00:29:17.473102 containerd[1535]: time="2025-05-13T00:29:17.472909004Z" level=info msg="StartContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\"" May 13 00:29:17.528812 containerd[1535]: time="2025-05-13T00:29:17.528768419Z" level=info msg="StartContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" returns successfully" May 13 00:29:17.644936 kubelet[2718]: I0513 00:29:17.644893 2718 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:29:17.667214 kubelet[2718]: I0513 00:29:17.667158 2718 
topology_manager.go:215] "Topology Admit Handler" podUID="7b18acc6-9c72-4448-b01f-3331b5b8ac2b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z2mpk" May 13 00:29:17.667472 kubelet[2718]: I0513 00:29:17.667372 2718 topology_manager.go:215] "Topology Admit Handler" podUID="71e5c99f-2fe6-4779-882b-b02a946ed8cd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tbtkx" May 13 00:29:17.838014 kubelet[2718]: I0513 00:29:17.837892 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71e5c99f-2fe6-4779-882b-b02a946ed8cd-config-volume\") pod \"coredns-7db6d8ff4d-tbtkx\" (UID: \"71e5c99f-2fe6-4779-882b-b02a946ed8cd\") " pod="kube-system/coredns-7db6d8ff4d-tbtkx" May 13 00:29:17.838014 kubelet[2718]: I0513 00:29:17.837945 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b18acc6-9c72-4448-b01f-3331b5b8ac2b-config-volume\") pod \"coredns-7db6d8ff4d-z2mpk\" (UID: \"7b18acc6-9c72-4448-b01f-3331b5b8ac2b\") " pod="kube-system/coredns-7db6d8ff4d-z2mpk" May 13 00:29:17.838149 kubelet[2718]: I0513 00:29:17.838035 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khs7n\" (UniqueName: \"kubernetes.io/projected/71e5c99f-2fe6-4779-882b-b02a946ed8cd-kube-api-access-khs7n\") pod \"coredns-7db6d8ff4d-tbtkx\" (UID: \"71e5c99f-2fe6-4779-882b-b02a946ed8cd\") " pod="kube-system/coredns-7db6d8ff4d-tbtkx" May 13 00:29:17.838149 kubelet[2718]: I0513 00:29:17.838065 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmdp\" (UniqueName: \"kubernetes.io/projected/7b18acc6-9c72-4448-b01f-3331b5b8ac2b-kube-api-access-lcmdp\") pod \"coredns-7db6d8ff4d-z2mpk\" (UID: \"7b18acc6-9c72-4448-b01f-3331b5b8ac2b\") " 
pod="kube-system/coredns-7db6d8ff4d-z2mpk" May 13 00:29:17.971817 kubelet[2718]: E0513 00:29:17.971683 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:17.974180 kubelet[2718]: E0513 00:29:17.973878 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:17.974579 containerd[1535]: time="2025-05-13T00:29:17.974533711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z2mpk,Uid:7b18acc6-9c72-4448-b01f-3331b5b8ac2b,Namespace:kube-system,Attempt:0,}" May 13 00:29:17.975404 containerd[1535]: time="2025-05-13T00:29:17.974793566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tbtkx,Uid:71e5c99f-2fe6-4779-882b-b02a946ed8cd,Namespace:kube-system,Attempt:0,}" May 13 00:29:18.457659 kubelet[2718]: E0513 00:29:18.457630 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:18.472533 kubelet[2718]: I0513 00:29:18.472403 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sq9rn" podStartSLOduration=6.137853582 podStartE2EDuration="13.472386387s" podCreationTimestamp="2025-05-13 00:29:05 +0000 UTC" firstStartedPulling="2025-05-13 00:29:06.136057829 +0000 UTC m=+14.853190644" lastFinishedPulling="2025-05-13 00:29:13.470590674 +0000 UTC m=+22.187723449" observedRunningTime="2025-05-13 00:29:18.471936881 +0000 UTC m=+27.189069696" watchObservedRunningTime="2025-05-13 00:29:18.472386387 +0000 UTC m=+27.189519202" May 13 00:29:19.459251 kubelet[2718]: E0513 00:29:19.459082 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:19.762253 systemd-networkd[1228]: cilium_host: Link UP May 13 00:29:19.762399 systemd-networkd[1228]: cilium_net: Link UP May 13 00:29:19.763746 systemd-networkd[1228]: cilium_net: Gained carrier May 13 00:29:19.763985 systemd-networkd[1228]: cilium_host: Gained carrier May 13 00:29:19.764535 systemd-networkd[1228]: cilium_net: Gained IPv6LL May 13 00:29:19.765704 systemd-networkd[1228]: cilium_host: Gained IPv6LL May 13 00:29:19.845287 systemd-networkd[1228]: cilium_vxlan: Link UP May 13 00:29:19.845296 systemd-networkd[1228]: cilium_vxlan: Gained carrier May 13 00:29:20.021605 systemd[1]: Started sshd@7-10.0.0.98:22-10.0.0.1:56940.service - OpenSSH per-connection server daemon (10.0.0.1:56940). May 13 00:29:20.064518 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 56940 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:20.066107 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:20.074376 systemd-logind[1515]: New session 8 of user core. May 13 00:29:20.080614 systemd[1]: Started session-8.scope - Session 8 of User core. May 13 00:29:20.146329 kernel: NET: Registered PF_ALG protocol family May 13 00:29:20.222516 sshd[3662]: pam_unix(sshd:session): session closed for user core May 13 00:29:20.226727 systemd[1]: sshd@7-10.0.0.98:22-10.0.0.1:56940.service: Deactivated successfully. May 13 00:29:20.228984 systemd-logind[1515]: Session 8 logged out. Waiting for processes to exit. May 13 00:29:20.230313 systemd[1]: session-8.scope: Deactivated successfully. May 13 00:29:20.232789 systemd-logind[1515]: Removed session 8. 
May 13 00:29:20.460252 kubelet[2718]: E0513 00:29:20.460206 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:20.748150 systemd-networkd[1228]: lxc_health: Link UP May 13 00:29:20.757613 systemd-networkd[1228]: lxc_health: Gained carrier May 13 00:29:21.176649 systemd-networkd[1228]: lxc6ee58041a092: Link UP May 13 00:29:21.194406 kernel: eth0: renamed from tmp598bb May 13 00:29:21.204532 systemd-networkd[1228]: lxc6ee58041a092: Gained carrier May 13 00:29:21.235237 systemd-networkd[1228]: lxcee078a84babb: Link UP May 13 00:29:21.242333 kernel: eth0: renamed from tmp07288 May 13 00:29:21.249213 systemd-networkd[1228]: lxcee078a84babb: Gained carrier May 13 00:29:21.816198 kubelet[2718]: E0513 00:29:21.816145 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:21.853459 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL May 13 00:29:22.173471 systemd-networkd[1228]: lxc_health: Gained IPv6LL May 13 00:29:22.365506 systemd-networkd[1228]: lxc6ee58041a092: Gained IPv6LL May 13 00:29:22.464229 kubelet[2718]: E0513 00:29:22.464002 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:22.749496 systemd-networkd[1228]: lxcee078a84babb: Gained IPv6LL May 13 00:29:23.466006 kubelet[2718]: E0513 00:29:23.465924 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:24.835554 containerd[1535]: time="2025-05-13T00:29:24.835022469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:24.835554 containerd[1535]: time="2025-05-13T00:29:24.835091112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:24.835554 containerd[1535]: time="2025-05-13T00:29:24.835107072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:24.835554 containerd[1535]: time="2025-05-13T00:29:24.835204717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:24.843780 containerd[1535]: time="2025-05-13T00:29:24.841764863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:29:24.843780 containerd[1535]: time="2025-05-13T00:29:24.841856067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:29:24.843780 containerd[1535]: time="2025-05-13T00:29:24.841869908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:24.843780 containerd[1535]: time="2025-05-13T00:29:24.842001634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:29:24.867175 systemd-resolved[1431]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:24.870556 systemd-resolved[1431]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:29:24.888406 containerd[1535]: time="2025-05-13T00:29:24.888362914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tbtkx,Uid:71e5c99f-2fe6-4779-882b-b02a946ed8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"072884f36b0a5571998b20a49cc4508073c66077e47cfe711d42fce29bd0d32a\"" May 13 00:29:24.889353 kubelet[2718]: E0513 00:29:24.889226 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:24.893754 containerd[1535]: time="2025-05-13T00:29:24.893702682Z" level=info msg="CreateContainer within sandbox \"072884f36b0a5571998b20a49cc4508073c66077e47cfe711d42fce29bd0d32a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 00:29:24.897029 containerd[1535]: time="2025-05-13T00:29:24.896930593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z2mpk,Uid:7b18acc6-9c72-4448-b01f-3331b5b8ac2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"598bbca64b76b1fa8c44f5083eca376ff9e2e4340a58b51679ddb0f3af541af3\"" May 13 00:29:24.897680 kubelet[2718]: E0513 00:29:24.897655 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:24.900316 containerd[1535]: time="2025-05-13T00:29:24.900161343Z" level=info msg="CreateContainer within sandbox \"598bbca64b76b1fa8c44f5083eca376ff9e2e4340a58b51679ddb0f3af541af3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 13 
00:29:24.915065 containerd[1535]: time="2025-05-13T00:29:24.914898630Z" level=info msg="CreateContainer within sandbox \"598bbca64b76b1fa8c44f5083eca376ff9e2e4340a58b51679ddb0f3af541af3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee017f3b40d4ecc366ea658d37481a0a90bac98fd351711e4a2857a91c3008e7\"" May 13 00:29:24.915267 containerd[1535]: time="2025-05-13T00:29:24.915232166Z" level=info msg="CreateContainer within sandbox \"072884f36b0a5571998b20a49cc4508073c66077e47cfe711d42fce29bd0d32a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e70817bbcf9a813e9c0b00c650f6dee6c477cdf40907df161fccf88f11b9fdf\"" May 13 00:29:24.917177 containerd[1535]: time="2025-05-13T00:29:24.917136494Z" level=info msg="StartContainer for \"9e70817bbcf9a813e9c0b00c650f6dee6c477cdf40907df161fccf88f11b9fdf\"" May 13 00:29:24.917177 containerd[1535]: time="2025-05-13T00:29:24.917174296Z" level=info msg="StartContainer for \"ee017f3b40d4ecc366ea658d37481a0a90bac98fd351711e4a2857a91c3008e7\"" May 13 00:29:24.975847 containerd[1535]: time="2025-05-13T00:29:24.972595518Z" level=info msg="StartContainer for \"9e70817bbcf9a813e9c0b00c650f6dee6c477cdf40907df161fccf88f11b9fdf\" returns successfully" May 13 00:29:24.975847 containerd[1535]: time="2025-05-13T00:29:24.972691163Z" level=info msg="StartContainer for \"ee017f3b40d4ecc366ea658d37481a0a90bac98fd351711e4a2857a91c3008e7\" returns successfully" May 13 00:29:25.236564 systemd[1]: Started sshd@8-10.0.0.98:22-10.0.0.1:34618.service - OpenSSH per-connection server daemon (10.0.0.1:34618). May 13 00:29:25.276990 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 34618 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:25.278594 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:25.283127 systemd-logind[1515]: New session 9 of user core. 
May 13 00:29:25.294745 systemd[1]: Started session-9.scope - Session 9 of User core. May 13 00:29:25.415563 sshd[4130]: pam_unix(sshd:session): session closed for user core May 13 00:29:25.418964 systemd[1]: sshd@8-10.0.0.98:22-10.0.0.1:34618.service: Deactivated successfully. May 13 00:29:25.421409 systemd-logind[1515]: Session 9 logged out. Waiting for processes to exit. May 13 00:29:25.421593 systemd[1]: session-9.scope: Deactivated successfully. May 13 00:29:25.422523 systemd-logind[1515]: Removed session 9. May 13 00:29:25.470759 kubelet[2718]: E0513 00:29:25.470116 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:25.478286 kubelet[2718]: E0513 00:29:25.477561 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:25.484197 kubelet[2718]: I0513 00:29:25.483431 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tbtkx" podStartSLOduration=20.483414829 podStartE2EDuration="20.483414829s" podCreationTimestamp="2025-05-13 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:29:25.482288498 +0000 UTC m=+34.199421313" watchObservedRunningTime="2025-05-13 00:29:25.483414829 +0000 UTC m=+34.200547644" May 13 00:29:25.504277 kubelet[2718]: I0513 00:29:25.504129 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z2mpk" podStartSLOduration=20.504111882 podStartE2EDuration="20.504111882s" podCreationTimestamp="2025-05-13 00:29:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 
00:29:25.503997197 +0000 UTC m=+34.221130012" watchObservedRunningTime="2025-05-13 00:29:25.504111882 +0000 UTC m=+34.221244657" May 13 00:29:26.479045 kubelet[2718]: E0513 00:29:26.479000 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:26.479451 kubelet[2718]: E0513 00:29:26.479081 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:27.481087 kubelet[2718]: E0513 00:29:27.481051 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:27.481518 kubelet[2718]: E0513 00:29:27.481077 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:29:30.426544 systemd[1]: Started sshd@9-10.0.0.98:22-10.0.0.1:34622.service - OpenSSH per-connection server daemon (10.0.0.1:34622). May 13 00:29:30.459559 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 34622 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:30.460799 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:30.464213 systemd-logind[1515]: New session 10 of user core. May 13 00:29:30.474565 systemd[1]: Started session-10.scope - Session 10 of User core. May 13 00:29:30.584496 sshd[4154]: pam_unix(sshd:session): session closed for user core May 13 00:29:30.587632 systemd[1]: sshd@9-10.0.0.98:22-10.0.0.1:34622.service: Deactivated successfully. May 13 00:29:30.589700 systemd-logind[1515]: Session 10 logged out. Waiting for processes to exit. 
May 13 00:29:30.590240 systemd[1]: session-10.scope: Deactivated successfully. May 13 00:29:30.591148 systemd-logind[1515]: Removed session 10. May 13 00:29:35.597675 systemd[1]: Started sshd@10-10.0.0.98:22-10.0.0.1:43940.service - OpenSSH per-connection server daemon (10.0.0.1:43940). May 13 00:29:35.632451 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 43940 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:35.633369 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:35.639769 systemd-logind[1515]: New session 11 of user core. May 13 00:29:35.650623 systemd[1]: Started session-11.scope - Session 11 of User core. May 13 00:29:35.762727 sshd[4170]: pam_unix(sshd:session): session closed for user core May 13 00:29:35.781613 systemd[1]: Started sshd@11-10.0.0.98:22-10.0.0.1:43944.service - OpenSSH per-connection server daemon (10.0.0.1:43944). May 13 00:29:35.782010 systemd[1]: sshd@10-10.0.0.98:22-10.0.0.1:43940.service: Deactivated successfully. May 13 00:29:35.785029 systemd-logind[1515]: Session 11 logged out. Waiting for processes to exit. May 13 00:29:35.785134 systemd[1]: session-11.scope: Deactivated successfully. May 13 00:29:35.788559 systemd-logind[1515]: Removed session 11. May 13 00:29:35.821313 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 43944 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:35.822791 sshd[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:35.827268 systemd-logind[1515]: New session 12 of user core. May 13 00:29:35.840627 systemd[1]: Started session-12.scope - Session 12 of User core. May 13 00:29:35.997201 sshd[4186]: pam_unix(sshd:session): session closed for user core May 13 00:29:36.011879 systemd[1]: Started sshd@12-10.0.0.98:22-10.0.0.1:43946.service - OpenSSH per-connection server daemon (10.0.0.1:43946). 
May 13 00:29:36.014365 systemd[1]: sshd@11-10.0.0.98:22-10.0.0.1:43944.service: Deactivated successfully. May 13 00:29:36.021066 systemd[1]: session-12.scope: Deactivated successfully. May 13 00:29:36.023737 systemd-logind[1515]: Session 12 logged out. Waiting for processes to exit. May 13 00:29:36.024972 systemd-logind[1515]: Removed session 12. May 13 00:29:36.050497 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 43946 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:36.051769 sshd[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:36.055985 systemd-logind[1515]: New session 13 of user core. May 13 00:29:36.066569 systemd[1]: Started session-13.scope - Session 13 of User core. May 13 00:29:36.176101 sshd[4200]: pam_unix(sshd:session): session closed for user core May 13 00:29:36.179698 systemd[1]: sshd@12-10.0.0.98:22-10.0.0.1:43946.service: Deactivated successfully. May 13 00:29:36.181970 systemd-logind[1515]: Session 13 logged out. Waiting for processes to exit. May 13 00:29:36.181971 systemd[1]: session-13.scope: Deactivated successfully. May 13 00:29:36.183126 systemd-logind[1515]: Removed session 13. May 13 00:29:41.193626 systemd[1]: Started sshd@13-10.0.0.98:22-10.0.0.1:43958.service - OpenSSH per-connection server daemon (10.0.0.1:43958). May 13 00:29:41.227842 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:41.228991 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.232643 systemd-logind[1515]: New session 14 of user core. May 13 00:29:41.242507 systemd[1]: Started session-14.scope - Session 14 of User core. 
May 13 00:29:41.354149 sshd[4221]: pam_unix(sshd:session): session closed for user core May 13 00:29:41.362619 systemd[1]: Started sshd@14-10.0.0.98:22-10.0.0.1:43962.service - OpenSSH per-connection server daemon (10.0.0.1:43962). May 13 00:29:41.363012 systemd[1]: sshd@13-10.0.0.98:22-10.0.0.1:43958.service: Deactivated successfully. May 13 00:29:41.365664 systemd[1]: session-14.scope: Deactivated successfully. May 13 00:29:41.365704 systemd-logind[1515]: Session 14 logged out. Waiting for processes to exit. May 13 00:29:41.366792 systemd-logind[1515]: Removed session 14. May 13 00:29:41.395561 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 43962 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:41.396672 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.399996 systemd-logind[1515]: New session 15 of user core. May 13 00:29:41.408517 systemd[1]: Started session-15.scope - Session 15 of User core. May 13 00:29:41.605919 sshd[4234]: pam_unix(sshd:session): session closed for user core May 13 00:29:41.614573 systemd[1]: Started sshd@15-10.0.0.98:22-10.0.0.1:43966.service - OpenSSH per-connection server daemon (10.0.0.1:43966). May 13 00:29:41.614944 systemd[1]: sshd@14-10.0.0.98:22-10.0.0.1:43962.service: Deactivated successfully. May 13 00:29:41.617592 systemd[1]: session-15.scope: Deactivated successfully. May 13 00:29:41.617875 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit. May 13 00:29:41.620834 systemd-logind[1515]: Removed session 15. May 13 00:29:41.653836 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 43966 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:41.655283 sshd[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:41.659024 systemd-logind[1515]: New session 16 of user core. 
May 13 00:29:41.672614 systemd[1]: Started session-16.scope - Session 16 of User core. May 13 00:29:42.950338 sshd[4247]: pam_unix(sshd:session): session closed for user core May 13 00:29:42.960067 systemd[1]: Started sshd@16-10.0.0.98:22-10.0.0.1:43674.service - OpenSSH per-connection server daemon (10.0.0.1:43674). May 13 00:29:42.962427 systemd[1]: sshd@15-10.0.0.98:22-10.0.0.1:43966.service: Deactivated successfully. May 13 00:29:42.966951 systemd[1]: session-16.scope: Deactivated successfully. May 13 00:29:42.970830 systemd-logind[1515]: Session 16 logged out. Waiting for processes to exit. May 13 00:29:42.975449 systemd-logind[1515]: Removed session 16. May 13 00:29:43.001120 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 43674 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:43.002558 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:43.006975 systemd-logind[1515]: New session 17 of user core. May 13 00:29:43.013676 systemd[1]: Started session-17.scope - Session 17 of User core. May 13 00:29:43.234137 sshd[4268]: pam_unix(sshd:session): session closed for user core May 13 00:29:43.242815 systemd[1]: Started sshd@17-10.0.0.98:22-10.0.0.1:43682.service - OpenSSH per-connection server daemon (10.0.0.1:43682). May 13 00:29:43.243811 systemd[1]: sshd@16-10.0.0.98:22-10.0.0.1:43674.service: Deactivated successfully. May 13 00:29:43.246952 systemd[1]: session-17.scope: Deactivated successfully. May 13 00:29:43.248574 systemd-logind[1515]: Session 17 logged out. Waiting for processes to exit. May 13 00:29:43.249630 systemd-logind[1515]: Removed session 17. 
May 13 00:29:43.279386 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 43682 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:43.280652 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:43.285175 systemd-logind[1515]: New session 18 of user core. May 13 00:29:43.294550 systemd[1]: Started session-18.scope - Session 18 of User core. May 13 00:29:43.398643 sshd[4283]: pam_unix(sshd:session): session closed for user core May 13 00:29:43.401841 systemd[1]: sshd@17-10.0.0.98:22-10.0.0.1:43682.service: Deactivated successfully. May 13 00:29:43.404533 systemd[1]: session-18.scope: Deactivated successfully. May 13 00:29:43.405584 systemd-logind[1515]: Session 18 logged out. Waiting for processes to exit. May 13 00:29:43.406261 systemd-logind[1515]: Removed session 18. May 13 00:29:48.409800 systemd[1]: Started sshd@18-10.0.0.98:22-10.0.0.1:43684.service - OpenSSH per-connection server daemon (10.0.0.1:43684). May 13 00:29:48.443191 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 43684 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:48.444410 sshd[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:48.448158 systemd-logind[1515]: New session 19 of user core. May 13 00:29:48.458599 systemd[1]: Started session-19.scope - Session 19 of User core. May 13 00:29:48.561264 sshd[4305]: pam_unix(sshd:session): session closed for user core May 13 00:29:48.563857 systemd[1]: sshd@18-10.0.0.98:22-10.0.0.1:43684.service: Deactivated successfully. May 13 00:29:48.566680 systemd-logind[1515]: Session 19 logged out. Waiting for processes to exit. May 13 00:29:48.567333 systemd[1]: session-19.scope: Deactivated successfully. May 13 00:29:48.568158 systemd-logind[1515]: Removed session 19. 
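[Editor's note] Each SSH connection above follows the same systemd-logind lifecycle: "New session N of user core" on open, then "Session N logged out" and "Removed session N" on close. A minimal sketch (my own helper, not part of any tooling in this log) that pairs those records to find sessions left open in an excerpt:

```python
# Sketch: pair systemd-logind "New session" / "Removed session" records
# from a journal excerpt to see which SSH sessions closed cleanly.
import re

LINES = [
    "May 13 00:29:41.399996 systemd-logind[1515]: New session 15 of user core.",
    "May 13 00:29:41.617875 systemd-logind[1515]: Session 15 logged out. Waiting for processes to exit.",
    "May 13 00:29:41.620834 systemd-logind[1515]: Removed session 15.",
    "May 13 00:29:41.659024 systemd-logind[1515]: New session 16 of user core.",
]

def session_states(lines):
    """Map session id -> 'open' or 'closed' from logind records."""
    state = {}
    for line in lines:
        if m := re.search(r"New session (\d+) of user", line):
            state[m.group(1)] = "open"
        elif m := re.search(r"Removed session (\d+)\.", line):
            state[m.group(1)] = "closed"
    return state

print(session_states(LINES))  # {'15': 'closed', '16': 'open'}
```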
May 13 00:29:53.576532 systemd[1]: Started sshd@19-10.0.0.98:22-10.0.0.1:35834.service - OpenSSH per-connection server daemon (10.0.0.1:35834). May 13 00:29:53.613573 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 35834 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:53.614788 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:53.618904 systemd-logind[1515]: New session 20 of user core. May 13 00:29:53.629756 systemd[1]: Started session-20.scope - Session 20 of User core. May 13 00:29:53.748647 sshd[4322]: pam_unix(sshd:session): session closed for user core May 13 00:29:53.751270 systemd[1]: sshd@19-10.0.0.98:22-10.0.0.1:35834.service: Deactivated successfully. May 13 00:29:53.753898 systemd[1]: session-20.scope: Deactivated successfully. May 13 00:29:53.755612 systemd-logind[1515]: Session 20 logged out. Waiting for processes to exit. May 13 00:29:53.758593 systemd-logind[1515]: Removed session 20. May 13 00:29:58.766581 systemd[1]: Started sshd@20-10.0.0.98:22-10.0.0.1:35844.service - OpenSSH per-connection server daemon (10.0.0.1:35844). May 13 00:29:58.804542 sshd[4337]: Accepted publickey for core from 10.0.0.1 port 35844 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:58.805012 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:58.809526 systemd-logind[1515]: New session 21 of user core. May 13 00:29:58.819607 systemd[1]: Started session-21.scope - Session 21 of User core. May 13 00:29:58.938159 sshd[4337]: pam_unix(sshd:session): session closed for user core May 13 00:29:58.947616 systemd[1]: Started sshd@21-10.0.0.98:22-10.0.0.1:35860.service - OpenSSH per-connection server daemon (10.0.0.1:35860). May 13 00:29:58.948038 systemd[1]: sshd@20-10.0.0.98:22-10.0.0.1:35844.service: Deactivated successfully. May 13 00:29:58.953329 systemd-logind[1515]: Session 21 logged out. 
Waiting for processes to exit. May 13 00:29:58.953976 systemd[1]: session-21.scope: Deactivated successfully. May 13 00:29:58.956394 systemd-logind[1515]: Removed session 21. May 13 00:29:58.987760 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 35860 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:29:58.989367 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:29:58.993699 systemd-logind[1515]: New session 22 of user core. May 13 00:29:59.002588 systemd[1]: Started session-22.scope - Session 22 of User core. May 13 00:30:00.861979 containerd[1535]: time="2025-05-13T00:30:00.861935212Z" level=info msg="StopContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" with timeout 30 (s)" May 13 00:30:00.863008 containerd[1535]: time="2025-05-13T00:30:00.862626639Z" level=info msg="Stop container \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" with signal terminated" May 13 00:30:00.910855 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756-rootfs.mount: Deactivated successfully. 
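[Editor's note] The containerd records above ("StopContainer ... with timeout 30 (s)", "Stop container ... with signal terminated") describe the usual graceful-stop sequence: send SIGTERM, wait up to the timeout, then escalate to SIGKILL. The sketch below demonstrates that pattern on an ordinary subprocess; it is a conceptual analogue, not containerd's implementation:

```python
# Sketch of SIGTERM-then-SIGKILL stop semantics, analogous to the
# containerd StopContainer timeout logged above (POSIX only).
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float) -> int:
    """Terminate gracefully; kill if the process outlives the timeout."""
    proc.send_signal(signal.SIGTERM)      # graceful stop request
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()                       # escalate to SIGKILL
        return proc.wait()

p = subprocess.Popen(["sleep", "60"])
rc = stop_with_timeout(p, timeout=5.0)
print(rc)  # -15: exited on SIGTERM before the timeout
```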
May 13 00:30:00.912637 containerd[1535]: time="2025-05-13T00:30:00.911758809Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:30:00.913616 containerd[1535]: time="2025-05-13T00:30:00.913491575Z" level=info msg="StopContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" with timeout 2 (s)" May 13 00:30:00.913991 containerd[1535]: time="2025-05-13T00:30:00.913917727Z" level=info msg="Stop container \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" with signal terminated" May 13 00:30:00.921160 systemd-networkd[1228]: lxc_health: Link DOWN May 13 00:30:00.921175 systemd-networkd[1228]: lxc_health: Lost carrier May 13 00:30:00.924281 containerd[1535]: time="2025-05-13T00:30:00.923789616Z" level=info msg="shim disconnected" id=d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756 namespace=k8s.io May 13 00:30:00.924281 containerd[1535]: time="2025-05-13T00:30:00.923853335Z" level=warning msg="cleaning up after shim disconnected" id=d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756 namespace=k8s.io May 13 00:30:00.924281 containerd[1535]: time="2025-05-13T00:30:00.923862454Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:00.959072 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1-rootfs.mount: Deactivated successfully. 
May 13 00:30:00.964130 containerd[1535]: time="2025-05-13T00:30:00.964001398Z" level=info msg="StopContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" returns successfully" May 13 00:30:00.964702 containerd[1535]: time="2025-05-13T00:30:00.964629906Z" level=info msg="StopPodSandbox for \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\"" May 13 00:30:00.964702 containerd[1535]: time="2025-05-13T00:30:00.964684945Z" level=info msg="Container to stop \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.964910 containerd[1535]: time="2025-05-13T00:30:00.964873221Z" level=info msg="shim disconnected" id=587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1 namespace=k8s.io May 13 00:30:00.964992 containerd[1535]: time="2025-05-13T00:30:00.964969739Z" level=warning msg="cleaning up after shim disconnected" id=587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1 namespace=k8s.io May 13 00:30:00.964992 containerd[1535]: time="2025-05-13T00:30:00.964989259Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:00.966682 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3-shm.mount: Deactivated successfully. 
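[Editor's note] The containerd lines above are logfmt-style records (`time="..." level=... msg="..."`) embedded in the journal. A naive parser for such excerpts (my own sketch — real logfmt has more escaping rules, and the container id in the sample is shortened):

```python
# Sketch: extract key=value and key="quoted value" pairs from a
# containerd logfmt-style record like the StopContainer lines above.
import re

RECORD = ('time="2025-05-13T00:30:00.913491575Z" level=info '
          'msg="StopContainer for \\"587926f3\\" with timeout 2 (s)"')

def parse_logfmt(record: str) -> dict:
    """Naive logfmt parse: handles bare and double-quoted values."""
    out = {}
    for m in re.finditer(r'(\w+)=("(?:\\.|[^"\\])*"|\S+)', record):
        key, val = m.group(1), m.group(2)
        if val.startswith('"'):
            val = val[1:-1].replace('\\"', '"')  # unquote, unescape
        out[key] = val
    return out

rec = parse_logfmt(RECORD)
print(rec["level"], "|", rec["msg"])
```

Splitting records this way makes it easy to filter the excerpt by `level=error` or by container id.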
May 13 00:30:00.979205 containerd[1535]: time="2025-05-13T00:30:00.979163065Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:30:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:30:00.983671 containerd[1535]: time="2025-05-13T00:30:00.983616859Z" level=info msg="StopContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" returns successfully" May 13 00:30:00.984265 containerd[1535]: time="2025-05-13T00:30:00.984224047Z" level=info msg="StopPodSandbox for \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\"" May 13 00:30:00.984375 containerd[1535]: time="2025-05-13T00:30:00.984271606Z" level=info msg="Container to stop \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.984375 containerd[1535]: time="2025-05-13T00:30:00.984285166Z" level=info msg="Container to stop \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.984375 containerd[1535]: time="2025-05-13T00:30:00.984294805Z" level=info msg="Container to stop \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.984375 containerd[1535]: time="2025-05-13T00:30:00.984358004Z" level=info msg="Container to stop \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.984375 containerd[1535]: time="2025-05-13T00:30:00.984369884Z" level=info msg="Container to stop \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:30:00.986375 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9-shm.mount: Deactivated successfully. May 13 00:30:01.001984 containerd[1535]: time="2025-05-13T00:30:01.001922145Z" level=info msg="shim disconnected" id=99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3 namespace=k8s.io May 13 00:30:01.001984 containerd[1535]: time="2025-05-13T00:30:01.001978744Z" level=warning msg="cleaning up after shim disconnected" id=99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3 namespace=k8s.io May 13 00:30:01.001984 containerd[1535]: time="2025-05-13T00:30:01.001990144Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:01.017348 containerd[1535]: time="2025-05-13T00:30:01.017187827Z" level=info msg="TearDown network for sandbox \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\" successfully" May 13 00:30:01.017348 containerd[1535]: time="2025-05-13T00:30:01.017224427Z" level=info msg="StopPodSandbox for \"99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3\" returns successfully" May 13 00:30:01.018045 containerd[1535]: time="2025-05-13T00:30:01.017287746Z" level=info msg="shim disconnected" id=f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9 namespace=k8s.io May 13 00:30:01.018045 containerd[1535]: time="2025-05-13T00:30:01.017443343Z" level=warning msg="cleaning up after shim disconnected" id=f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9 namespace=k8s.io May 13 00:30:01.018045 containerd[1535]: time="2025-05-13T00:30:01.017452583Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:30:01.032289 containerd[1535]: time="2025-05-13T00:30:01.032239994Z" level=info msg="TearDown network for sandbox \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" successfully" May 13 00:30:01.032289 containerd[1535]: time="2025-05-13T00:30:01.032276393Z" level=info 
msg="StopPodSandbox for \"f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9\" returns successfully" May 13 00:30:01.200012 kubelet[2718]: I0513 00:30:01.199962 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-kernel\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.200012 kubelet[2718]: I0513 00:30:01.200009 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-xtables-lock\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200028 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-cgroup\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200051 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqq9\" (UniqueName: \"kubernetes.io/projected/14ebc8c6-f9c6-4fa1-9f28-915be5214360-kube-api-access-9zqq9\") pod \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\" (UID: \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200066 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-lib-modules\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200083 2718 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6hpl4\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-kube-api-access-6hpl4\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200098 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-bpf-maps\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201526 kubelet[2718]: I0513 00:30:01.200115 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-config-path\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200142 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-run\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200159 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-etc-cni-netd\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200173 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-net\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: 
\"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200213 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hubble-tls\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200232 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14ebc8c6-f9c6-4fa1-9f28-915be5214360-cilium-config-path\") pod \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\" (UID: \"14ebc8c6-f9c6-4fa1-9f28-915be5214360\") " May 13 00:30:01.201742 kubelet[2718]: I0513 00:30:01.200246 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cni-path\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201869 kubelet[2718]: I0513 00:30:01.200261 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hostproc\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.201869 kubelet[2718]: I0513 00:30:01.200277 2718 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-clustermesh-secrets\") pod \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\" (UID: \"c4bc0836-86e1-4a9c-b262-c783d9fbb9c7\") " May 13 00:30:01.203364 kubelet[2718]: I0513 00:30:01.203057 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-cgroup" 
(OuterVolumeSpecName: "cilium-cgroup") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.203364 kubelet[2718]: I0513 00:30:01.203098 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.203364 kubelet[2718]: I0513 00:30:01.203062 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.207873 kubelet[2718]: I0513 00:30:01.207827 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208155 kubelet[2718]: I0513 00:30:01.208047 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208155 kubelet[2718]: I0513 00:30:01.208089 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208384 kubelet[2718]: I0513 00:30:01.208353 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208529 kubelet[2718]: I0513 00:30:01.208500 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208671 kubelet[2718]: I0513 00:30:01.208604 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208892 kubelet[2718]: I0513 00:30:01.208825 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:30:01.208892 kubelet[2718]: I0513 00:30:01.208846 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14ebc8c6-f9c6-4fa1-9f28-915be5214360-kube-api-access-9zqq9" (OuterVolumeSpecName: "kube-api-access-9zqq9") pod "14ebc8c6-f9c6-4fa1-9f28-915be5214360" (UID: "14ebc8c6-f9c6-4fa1-9f28-915be5214360"). InnerVolumeSpecName "kube-api-access-9zqq9". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:30:01.209742 kubelet[2718]: I0513 00:30:01.209700 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:30:01.209742 kubelet[2718]: I0513 00:30:01.209729 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-kube-api-access-6hpl4" (OuterVolumeSpecName: "kube-api-access-6hpl4") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "kube-api-access-6hpl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:30:01.211058 kubelet[2718]: I0513 00:30:01.210942 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14ebc8c6-f9c6-4fa1-9f28-915be5214360-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "14ebc8c6-f9c6-4fa1-9f28-915be5214360" (UID: "14ebc8c6-f9c6-4fa1-9f28-915be5214360"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:30:01.211596 kubelet[2718]: I0513 00:30:01.211563 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:30:01.212280 kubelet[2718]: I0513 00:30:01.212213 2718 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" (UID: "c4bc0836-86e1-4a9c-b262-c783d9fbb9c7"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:30:01.300586 kubelet[2718]: I0513 00:30:01.300548 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300586 kubelet[2718]: I0513 00:30:01.300585 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-run\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300586 kubelet[2718]: I0513 00:30:01.300595 2718 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300586 kubelet[2718]: I0513 00:30:01.300603 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300610 2718 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300618 2718 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cni-path\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300635 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/14ebc8c6-f9c6-4fa1-9f28-915be5214360-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 
00:30:01.300647 2718 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300664 2718 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-hostproc\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300673 2718 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300681 2718 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.300821 kubelet[2718]: I0513 00:30:01.300689 2718 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.301027 kubelet[2718]: I0513 00:30:01.300697 2718 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9zqq9\" (UniqueName: \"kubernetes.io/projected/14ebc8c6-f9c6-4fa1-9f28-915be5214360-kube-api-access-9zqq9\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.301027 kubelet[2718]: I0513 00:30:01.300705 2718 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-lib-modules\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.301027 kubelet[2718]: I0513 00:30:01.300714 2718 reconciler_common.go:289] "Volume detached for volume 
\"kube-api-access-6hpl4\" (UniqueName: \"kubernetes.io/projected/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-kube-api-access-6hpl4\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.301027 kubelet[2718]: I0513 00:30:01.300721 2718 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 13 00:30:01.433827 kubelet[2718]: E0513 00:30:01.433785 2718 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 13 00:30:01.544825 kubelet[2718]: I0513 00:30:01.544718 2718 scope.go:117] "RemoveContainer" containerID="587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1" May 13 00:30:01.548015 containerd[1535]: time="2025-05-13T00:30:01.547828291Z" level=info msg="RemoveContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\"" May 13 00:30:01.551233 containerd[1535]: time="2025-05-13T00:30:01.551193590Z" level=info msg="RemoveContainer for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" returns successfully" May 13 00:30:01.552326 kubelet[2718]: I0513 00:30:01.551930 2718 scope.go:117] "RemoveContainer" containerID="bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d" May 13 00:30:01.559246 containerd[1535]: time="2025-05-13T00:30:01.559218364Z" level=info msg="RemoveContainer for \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\"" May 13 00:30:01.564001 containerd[1535]: time="2025-05-13T00:30:01.563973918Z" level=info msg="RemoveContainer for \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\" returns successfully" May 13 00:30:01.564191 kubelet[2718]: I0513 00:30:01.564167 2718 scope.go:117] "RemoveContainer" containerID="da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d" May 13 
00:30:01.565200 containerd[1535]: time="2025-05-13T00:30:01.565172536Z" level=info msg="RemoveContainer for \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\"" May 13 00:30:01.569892 containerd[1535]: time="2025-05-13T00:30:01.569858850Z" level=info msg="RemoveContainer for \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\" returns successfully" May 13 00:30:01.570669 kubelet[2718]: I0513 00:30:01.570635 2718 scope.go:117] "RemoveContainer" containerID="cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700" May 13 00:30:01.571860 containerd[1535]: time="2025-05-13T00:30:01.571583299Z" level=info msg="RemoveContainer for \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\"" May 13 00:30:01.574697 containerd[1535]: time="2025-05-13T00:30:01.574423847Z" level=info msg="RemoveContainer for \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\" returns successfully" May 13 00:30:01.575143 kubelet[2718]: I0513 00:30:01.575117 2718 scope.go:117] "RemoveContainer" containerID="1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5" May 13 00:30:01.577872 containerd[1535]: time="2025-05-13T00:30:01.577828665Z" level=info msg="RemoveContainer for \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\"" May 13 00:30:01.582436 containerd[1535]: time="2025-05-13T00:30:01.582396062Z" level=info msg="RemoveContainer for \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\" returns successfully" May 13 00:30:01.582732 kubelet[2718]: I0513 00:30:01.582692 2718 scope.go:117] "RemoveContainer" containerID="587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1" May 13 00:30:01.583450 containerd[1535]: time="2025-05-13T00:30:01.583409044Z" level=error msg="ContainerStatus for \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\": not found" May 13 00:30:01.594948 kubelet[2718]: E0513 00:30:01.594895 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\": not found" containerID="587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1" May 13 00:30:01.595043 kubelet[2718]: I0513 00:30:01.594956 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1"} err="failed to get container status \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\": rpc error: code = NotFound desc = an error occurred when try to find container \"587926f3c20f79937bed46715b3c3909136d69579b937bebd867e0003f3a85e1\": not found" May 13 00:30:01.595068 kubelet[2718]: I0513 00:30:01.595045 2718 scope.go:117] "RemoveContainer" containerID="bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d" May 13 00:30:01.595439 containerd[1535]: time="2025-05-13T00:30:01.595339907Z" level=error msg="ContainerStatus for \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\": not found" May 13 00:30:01.595521 kubelet[2718]: E0513 00:30:01.595480 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\": not found" containerID="bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d" May 13 00:30:01.595521 kubelet[2718]: I0513 00:30:01.595499 2718 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d"} err="failed to get container status \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"bad081893b44746eda9edc11a27bce179bd202d1dbd7b77a361d398a9875fd9d\": not found" May 13 00:30:01.595521 kubelet[2718]: I0513 00:30:01.595511 2718 scope.go:117] "RemoveContainer" containerID="da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d" May 13 00:30:01.595725 containerd[1535]: time="2025-05-13T00:30:01.595689940Z" level=error msg="ContainerStatus for \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\": not found" May 13 00:30:01.595820 kubelet[2718]: E0513 00:30:01.595795 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\": not found" containerID="da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d" May 13 00:30:01.595870 kubelet[2718]: I0513 00:30:01.595826 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d"} err="failed to get container status \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\": rpc error: code = NotFound desc = an error occurred when try to find container \"da0dff4c1ac9e7ae69cbe7a7f1d909c68ebba678c20f3143726e3e7d9b2d087d\": not found" May 13 00:30:01.595870 kubelet[2718]: I0513 00:30:01.595842 2718 scope.go:117] "RemoveContainer" containerID="cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700" May 13 00:30:01.596028 
containerd[1535]: time="2025-05-13T00:30:01.595992175Z" level=error msg="ContainerStatus for \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\": not found" May 13 00:30:01.596223 kubelet[2718]: E0513 00:30:01.596127 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\": not found" containerID="cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700" May 13 00:30:01.596223 kubelet[2718]: I0513 00:30:01.596154 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700"} err="failed to get container status \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\": rpc error: code = NotFound desc = an error occurred when try to find container \"cceb3cdd9c86465d1f2049cfa0838d51adf61108054e57f223f1a5de5591c700\": not found" May 13 00:30:01.596223 kubelet[2718]: I0513 00:30:01.596171 2718 scope.go:117] "RemoveContainer" containerID="1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5" May 13 00:30:01.596610 containerd[1535]: time="2025-05-13T00:30:01.596500246Z" level=error msg="ContainerStatus for \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\": not found" May 13 00:30:01.596684 kubelet[2718]: E0513 00:30:01.596616 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\": not found" containerID="1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5" May 13 00:30:01.596684 kubelet[2718]: I0513 00:30:01.596637 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5"} err="failed to get container status \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f28cbfae5ea8c15840e24786d4b7b5f6d357a0a7b70e611106ba9aea873a0d5\": not found" May 13 00:30:01.596684 kubelet[2718]: I0513 00:30:01.596683 2718 scope.go:117] "RemoveContainer" containerID="d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756" May 13 00:30:01.597695 containerd[1535]: time="2025-05-13T00:30:01.597663024Z" level=info msg="RemoveContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\"" May 13 00:30:01.600011 containerd[1535]: time="2025-05-13T00:30:01.599971062Z" level=info msg="RemoveContainer for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" returns successfully" May 13 00:30:01.600241 kubelet[2718]: I0513 00:30:01.600164 2718 scope.go:117] "RemoveContainer" containerID="d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756" May 13 00:30:01.600399 containerd[1535]: time="2025-05-13T00:30:01.600351176Z" level=error msg="ContainerStatus for \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\": not found" May 13 00:30:01.600500 kubelet[2718]: E0513 00:30:01.600479 2718 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\": not found" containerID="d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756" May 13 00:30:01.600545 kubelet[2718]: I0513 00:30:01.600506 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756"} err="failed to get container status \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4cb805ebf227d49e8e6ccf758123bd547d2b9cd2c04fd6609bcff9e68140756\": not found" May 13 00:30:01.890747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7b2d6ae3858702f8681c5013396be4c19847442f4fabd9b07d8fce92d8c5ec9-rootfs.mount: Deactivated successfully. May 13 00:30:01.890885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99145e9c5d85e68e922f6fd30e824549ba0f290206cc2eaf24d690c6818c13f3-rootfs.mount: Deactivated successfully. May 13 00:30:01.890975 systemd[1]: var-lib-kubelet-pods-c4bc0836\x2d86e1\x2d4a9c\x2db262\x2dc783d9fbb9c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6hpl4.mount: Deactivated successfully. May 13 00:30:01.891054 systemd[1]: var-lib-kubelet-pods-c4bc0836\x2d86e1\x2d4a9c\x2db262\x2dc783d9fbb9c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 13 00:30:01.891136 systemd[1]: var-lib-kubelet-pods-c4bc0836\x2d86e1\x2d4a9c\x2db262\x2dc783d9fbb9c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 13 00:30:01.891210 systemd[1]: var-lib-kubelet-pods-14ebc8c6\x2df9c6\x2d4fa1\x2d9f28\x2d915be5214360-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9zqq9.mount: Deactivated successfully. 
May 13 00:30:02.817726 sshd[4349]: pam_unix(sshd:session): session closed for user core May 13 00:30:02.825553 systemd[1]: Started sshd@22-10.0.0.98:22-10.0.0.1:39110.service - OpenSSH per-connection server daemon (10.0.0.1:39110). May 13 00:30:02.825934 systemd[1]: sshd@21-10.0.0.98:22-10.0.0.1:35860.service: Deactivated successfully. May 13 00:30:02.828741 systemd[1]: session-22.scope: Deactivated successfully. May 13 00:30:02.830768 systemd-logind[1515]: Session 22 logged out. Waiting for processes to exit. May 13 00:30:02.832005 systemd-logind[1515]: Removed session 22. May 13 00:30:02.859264 sshd[4518]: Accepted publickey for core from 10.0.0.1 port 39110 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:02.860644 sshd[4518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:02.864417 systemd-logind[1515]: New session 23 of user core. May 13 00:30:02.879688 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 13 00:30:02.895543 kubelet[2718]: I0513 00:30:02.895485 2718 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-13T00:30:02Z","lastTransitionTime":"2025-05-13T00:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 13 00:30:03.377413 kubelet[2718]: I0513 00:30:03.377360 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14ebc8c6-f9c6-4fa1-9f28-915be5214360" path="/var/lib/kubelet/pods/14ebc8c6-f9c6-4fa1-9f28-915be5214360/volumes" May 13 00:30:03.377869 kubelet[2718]: I0513 00:30:03.377833 2718 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" path="/var/lib/kubelet/pods/c4bc0836-86e1-4a9c-b262-c783d9fbb9c7/volumes" May 13 00:30:04.025696 sshd[4518]: pam_unix(sshd:session): session closed for user core May 13 00:30:04.035997 systemd[1]: Started sshd@23-10.0.0.98:22-10.0.0.1:39116.service - OpenSSH per-connection server daemon (10.0.0.1:39116). May 13 00:30:04.036403 systemd[1]: sshd@22-10.0.0.98:22-10.0.0.1:39110.service: Deactivated successfully. 
May 13 00:30:04.048633 kubelet[2718]: I0513 00:30:04.044145 2718 topology_manager.go:215] "Topology Admit Handler" podUID="1b2ea4fe-7ab4-4102-a316-dee45a5182e7" podNamespace="kube-system" podName="cilium-bfh29" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044328 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14ebc8c6-f9c6-4fa1-9f28-915be5214360" containerName="cilium-operator" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044343 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="cilium-agent" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044350 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="mount-bpf-fs" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044356 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="clean-cilium-state" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044363 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="mount-cgroup" May 13 00:30:04.048633 kubelet[2718]: E0513 00:30:04.044369 2718 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="apply-sysctl-overwrites" May 13 00:30:04.048633 kubelet[2718]: I0513 00:30:04.044390 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4bc0836-86e1-4a9c-b262-c783d9fbb9c7" containerName="cilium-agent" May 13 00:30:04.048633 kubelet[2718]: I0513 00:30:04.044397 2718 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ebc8c6-f9c6-4fa1-9f28-915be5214360" containerName="cilium-operator" May 13 00:30:04.054113 systemd[1]: session-23.scope: Deactivated successfully. May 13 00:30:04.056323 systemd-logind[1515]: Session 23 logged out. Waiting for processes to exit. 
May 13 00:30:04.063412 systemd-logind[1515]: Removed session 23. May 13 00:30:04.097492 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 39116 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:04.099059 sshd[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:04.103085 systemd-logind[1515]: New session 24 of user core. May 13 00:30:04.114678 systemd[1]: Started session-24.scope - Session 24 of User core. May 13 00:30:04.167012 sshd[4532]: pam_unix(sshd:session): session closed for user core May 13 00:30:04.176567 systemd[1]: Started sshd@24-10.0.0.98:22-10.0.0.1:39124.service - OpenSSH per-connection server daemon (10.0.0.1:39124). May 13 00:30:04.176978 systemd[1]: sshd@23-10.0.0.98:22-10.0.0.1:39116.service: Deactivated successfully. May 13 00:30:04.180482 systemd-logind[1515]: Session 24 logged out. Waiting for processes to exit. May 13 00:30:04.180753 systemd[1]: session-24.scope: Deactivated successfully. May 13 00:30:04.184425 systemd-logind[1515]: Removed session 24. 
May 13 00:30:04.211735 sshd[4541]: Accepted publickey for core from 10.0.0.1 port 39124 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:30:04.212984 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:30:04.216285 kubelet[2718]: I0513 00:30:04.215935 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-host-proc-sys-net\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216285 kubelet[2718]: I0513 00:30:04.215981 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-host-proc-sys-kernel\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216285 kubelet[2718]: I0513 00:30:04.215998 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-clustermesh-secrets\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216285 kubelet[2718]: I0513 00:30:04.216014 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-cilium-run\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216285 kubelet[2718]: I0513 00:30:04.216033 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-lib-modules\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216050 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-cilium-config-path\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216065 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-bpf-maps\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216081 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-cni-path\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216096 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-cilium-ipsec-secrets\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216111 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-cilium-cgroup\") pod \"cilium-bfh29\" (UID: 
\"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216489 kubelet[2718]: I0513 00:30:04.216128 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-hostproc\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216623 kubelet[2718]: I0513 00:30:04.216143 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-etc-cni-netd\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216623 kubelet[2718]: I0513 00:30:04.216158 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-xtables-lock\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216623 kubelet[2718]: I0513 00:30:04.216172 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-hubble-tls\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216623 kubelet[2718]: I0513 00:30:04.216188 2718 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79hrr\" (UniqueName: \"kubernetes.io/projected/1b2ea4fe-7ab4-4102-a316-dee45a5182e7-kube-api-access-79hrr\") pod \"cilium-bfh29\" (UID: \"1b2ea4fe-7ab4-4102-a316-dee45a5182e7\") " pod="kube-system/cilium-bfh29" May 13 00:30:04.216892 systemd-logind[1515]: New session 25 of 
user core. May 13 00:30:04.224536 systemd[1]: Started session-25.scope - Session 25 of User core. May 13 00:30:04.352580 kubelet[2718]: E0513 00:30:04.352348 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:30:04.353041 containerd[1535]: time="2025-05-13T00:30:04.352832998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfh29,Uid:1b2ea4fe-7ab4-4102-a316-dee45a5182e7,Namespace:kube-system,Attempt:0,}" May 13 00:30:04.374910 containerd[1535]: time="2025-05-13T00:30:04.374804309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:30:04.374910 containerd[1535]: time="2025-05-13T00:30:04.374873588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:30:04.374910 containerd[1535]: time="2025-05-13T00:30:04.374885188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:30:04.375365 containerd[1535]: time="2025-05-13T00:30:04.374983666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:30:04.411525 containerd[1535]: time="2025-05-13T00:30:04.411482480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfh29,Uid:1b2ea4fe-7ab4-4102-a316-dee45a5182e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\""
May 13 00:30:04.412140 kubelet[2718]: E0513 00:30:04.412120 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:04.415627 containerd[1535]: time="2025-05-13T00:30:04.415564659Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 13 00:30:04.425478 containerd[1535]: time="2025-05-13T00:30:04.425423071Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"377887d2b08e322eaed8f2407267c71057c26b91fe04ef560a15138ab88544c9\""
May 13 00:30:04.426151 containerd[1535]: time="2025-05-13T00:30:04.426127741Z" level=info msg="StartContainer for \"377887d2b08e322eaed8f2407267c71057c26b91fe04ef560a15138ab88544c9\""
May 13 00:30:04.480613 containerd[1535]: time="2025-05-13T00:30:04.480550486Z" level=info msg="StartContainer for \"377887d2b08e322eaed8f2407267c71057c26b91fe04ef560a15138ab88544c9\" returns successfully"
May 13 00:30:04.524546 containerd[1535]: time="2025-05-13T00:30:04.524487788Z" level=info msg="shim disconnected" id=377887d2b08e322eaed8f2407267c71057c26b91fe04ef560a15138ab88544c9 namespace=k8s.io
May 13 00:30:04.524546 containerd[1535]: time="2025-05-13T00:30:04.524541827Z" level=warning msg="cleaning up after shim disconnected" id=377887d2b08e322eaed8f2407267c71057c26b91fe04ef560a15138ab88544c9 namespace=k8s.io
May 13 00:30:04.524546 containerd[1535]: time="2025-05-13T00:30:04.524550947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:30:04.556637 kubelet[2718]: E0513 00:30:04.556464 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:04.558525 containerd[1535]: time="2025-05-13T00:30:04.558486439Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 13 00:30:04.565916 containerd[1535]: time="2025-05-13T00:30:04.565878889Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e645f98d6b1eb442a1925bf2e1ad6192b4990c5a0d31d09bc0e0e45cfad3a6e\""
May 13 00:30:04.567255 containerd[1535]: time="2025-05-13T00:30:04.566456720Z" level=info msg="StartContainer for \"7e645f98d6b1eb442a1925bf2e1ad6192b4990c5a0d31d09bc0e0e45cfad3a6e\""
May 13 00:30:04.615675 containerd[1535]: time="2025-05-13T00:30:04.615561745Z" level=info msg="StartContainer for \"7e645f98d6b1eb442a1925bf2e1ad6192b4990c5a0d31d09bc0e0e45cfad3a6e\" returns successfully"
May 13 00:30:04.648819 containerd[1535]: time="2025-05-13T00:30:04.648722889Z" level=info msg="shim disconnected" id=7e645f98d6b1eb442a1925bf2e1ad6192b4990c5a0d31d09bc0e0e45cfad3a6e namespace=k8s.io
May 13 00:30:04.648819 containerd[1535]: time="2025-05-13T00:30:04.648809687Z" level=warning msg="cleaning up after shim disconnected" id=7e645f98d6b1eb442a1925bf2e1ad6192b4990c5a0d31d09bc0e0e45cfad3a6e namespace=k8s.io
May 13 00:30:04.649151 containerd[1535]: time="2025-05-13T00:30:04.648819407Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:30:05.562326 kubelet[2718]: E0513 00:30:05.561759 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:05.564719 containerd[1535]: time="2025-05-13T00:30:05.564679664Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 13 00:30:05.578940 containerd[1535]: time="2025-05-13T00:30:05.578889345Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80\""
May 13 00:30:05.579427 containerd[1535]: time="2025-05-13T00:30:05.579392938Z" level=info msg="StartContainer for \"b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80\""
May 13 00:30:05.628472 containerd[1535]: time="2025-05-13T00:30:05.628435054Z" level=info msg="StartContainer for \"b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80\" returns successfully"
May 13 00:30:05.650101 containerd[1535]: time="2025-05-13T00:30:05.650030952Z" level=info msg="shim disconnected" id=b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80 namespace=k8s.io
May 13 00:30:05.650101 containerd[1535]: time="2025-05-13T00:30:05.650091551Z" level=warning msg="cleaning up after shim disconnected" id=b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80 namespace=k8s.io
May 13 00:30:05.650101 containerd[1535]: time="2025-05-13T00:30:05.650100391Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:30:05.659136 containerd[1535]: time="2025-05-13T00:30:05.659088026Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:30:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 13 00:30:06.321616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b115c4e553d8149b78b5468689c7a02f8a12bebc4b4b65d40873880ba0bc0f80-rootfs.mount: Deactivated successfully.
May 13 00:30:06.435461 kubelet[2718]: E0513 00:30:06.435417 2718 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 13 00:30:06.566142 kubelet[2718]: E0513 00:30:06.565361 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:06.567502 containerd[1535]: time="2025-05-13T00:30:06.567466979Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 13 00:30:06.580710 containerd[1535]: time="2025-05-13T00:30:06.580609688Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58\""
May 13 00:30:06.581855 containerd[1535]: time="2025-05-13T00:30:06.581813833Z" level=info msg="StartContainer for \"dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58\""
May 13 00:30:06.624372 containerd[1535]: time="2025-05-13T00:30:06.624315761Z" level=info msg="StartContainer for \"dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58\" returns successfully"
May 13 00:30:06.645523 containerd[1535]: time="2025-05-13T00:30:06.645465086Z" level=info msg="shim disconnected" id=dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58 namespace=k8s.io
May 13 00:30:06.645523 containerd[1535]: time="2025-05-13T00:30:06.645520406Z" level=warning msg="cleaning up after shim disconnected" id=dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58 namespace=k8s.io
May 13 00:30:06.645523 containerd[1535]: time="2025-05-13T00:30:06.645531845Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:30:07.321605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd14c5f450c8a732f4259e15d15bcd8a3ddcb5ed82b04eee4b735f2e6549ed58-rootfs.mount: Deactivated successfully.
May 13 00:30:07.569372 kubelet[2718]: E0513 00:30:07.569327 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:07.571266 containerd[1535]: time="2025-05-13T00:30:07.571219767Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 13 00:30:07.583138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3463891409.mount: Deactivated successfully.
May 13 00:30:07.588702 containerd[1535]: time="2025-05-13T00:30:07.588640598Z" level=info msg="CreateContainer within sandbox \"4fa43574f468318a2e1934e6f44b4ac45b818fb67c7dc48c4f2efc980bbd99f8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c86e88ccb13b235ad059b70c5c60bdfb4d4f3f915d23abd77aec786cf2450589\""
May 13 00:30:07.589269 containerd[1535]: time="2025-05-13T00:30:07.589176711Z" level=info msg="StartContainer for \"c86e88ccb13b235ad059b70c5c60bdfb4d4f3f915d23abd77aec786cf2450589\""
May 13 00:30:07.634192 containerd[1535]: time="2025-05-13T00:30:07.634146450Z" level=info msg="StartContainer for \"c86e88ccb13b235ad059b70c5c60bdfb4d4f3f915d23abd77aec786cf2450589\" returns successfully"
May 13 00:30:07.903333 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 13 00:30:08.574670 kubelet[2718]: E0513 00:30:08.574243 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:08.603182 kubelet[2718]: I0513 00:30:08.603121 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bfh29" podStartSLOduration=4.603102743 podStartE2EDuration="4.603102743s" podCreationTimestamp="2025-05-13 00:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:30:08.602381471 +0000 UTC m=+77.319514286" watchObservedRunningTime="2025-05-13 00:30:08.603102743 +0000 UTC m=+77.320235558"
May 13 00:30:10.354254 kubelet[2718]: E0513 00:30:10.354212 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:10.586402 systemd[1]: run-containerd-runc-k8s.io-c86e88ccb13b235ad059b70c5c60bdfb4d4f3f915d23abd77aec786cf2450589-runc.Et4R43.mount: Deactivated successfully.
May 13 00:30:10.727610 systemd-networkd[1228]: lxc_health: Link UP
May 13 00:30:10.732041 systemd-networkd[1228]: lxc_health: Gained carrier
May 13 00:30:12.354496 kubelet[2718]: E0513 00:30:12.354417 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:12.582190 kubelet[2718]: E0513 00:30:12.582140 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:12.605420 systemd-networkd[1228]: lxc_health: Gained IPv6LL
May 13 00:30:13.584031 kubelet[2718]: E0513 00:30:13.583828 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:14.372960 kubelet[2718]: E0513 00:30:14.372925 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:30:16.972437 sshd[4541]: pam_unix(sshd:session): session closed for user core
May 13 00:30:16.975129 systemd[1]: sshd@24-10.0.0.98:22-10.0.0.1:39124.service: Deactivated successfully.
May 13 00:30:16.978070 systemd-logind[1515]: Session 25 logged out. Waiting for processes to exit.
May 13 00:30:16.979209 systemd[1]: session-25.scope: Deactivated successfully.
May 13 00:30:16.980537 systemd-logind[1515]: Removed session 25.