May 15 00:36:19.896993 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 00:36:19.897015 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed May 14 22:53:13 -00 2025
May 15 00:36:19.897024 kernel: KASLR enabled
May 15 00:36:19.897030 kernel: efi: EFI v2.7 by EDK II
May 15 00:36:19.897036 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 15 00:36:19.897042 kernel: random: crng init done
May 15 00:36:19.897050 kernel: ACPI: Early table checksum verification disabled
May 15 00:36:19.897056 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 15 00:36:19.897062 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 00:36:19.897070 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897077 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897083 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897089 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897096 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897103 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897112 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897119 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897125 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 00:36:19.897132 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 00:36:19.897139 kernel: NUMA: Failed to initialise from firmware
May 15 00:36:19.897146 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:36:19.897152 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 00:36:19.897159 kernel: Zone ranges:
May 15 00:36:19.897166 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:36:19.897172 kernel: DMA32 empty
May 15 00:36:19.897180 kernel: Normal empty
May 15 00:36:19.897187 kernel: Movable zone start for each node
May 15 00:36:19.897194 kernel: Early memory node ranges
May 15 00:36:19.897200 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 00:36:19.897207 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 00:36:19.897214 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 00:36:19.897221 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 00:36:19.897227 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 00:36:19.897234 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 00:36:19.897241 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 00:36:19.897248 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 00:36:19.897254 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 00:36:19.897262 kernel: psci: probing for conduit method from ACPI.
May 15 00:36:19.897269 kernel: psci: PSCIv1.1 detected in firmware.
May 15 00:36:19.897276 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 00:36:19.897285 kernel: psci: Trusted OS migration not required
May 15 00:36:19.897292 kernel: psci: SMC Calling Convention v1.1
May 15 00:36:19.897299 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 00:36:19.897308 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 00:36:19.897315 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 00:36:19.897322 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 00:36:19.897329 kernel: Detected PIPT I-cache on CPU0
May 15 00:36:19.897336 kernel: CPU features: detected: GIC system register CPU interface
May 15 00:36:19.897343 kernel: CPU features: detected: Hardware dirty bit management
May 15 00:36:19.897351 kernel: CPU features: detected: Spectre-v4
May 15 00:36:19.897358 kernel: CPU features: detected: Spectre-BHB
May 15 00:36:19.897365 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 00:36:19.897372 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 00:36:19.897380 kernel: CPU features: detected: ARM erratum 1418040
May 15 00:36:19.897387 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 00:36:19.897394 kernel: alternatives: applying boot alternatives
May 15 00:36:19.897402 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596
May 15 00:36:19.897410 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 00:36:19.897417 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 00:36:19.897424 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 00:36:19.897431 kernel: Fallback order for Node 0: 0
May 15 00:36:19.897438 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 00:36:19.897445 kernel: Policy zone: DMA
May 15 00:36:19.897452 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 00:36:19.897460 kernel: software IO TLB: area num 4.
May 15 00:36:19.897468 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 00:36:19.897475 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 15 00:36:19.897483 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 00:36:19.897490 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 00:36:19.897497 kernel: rcu: RCU event tracing is enabled.
May 15 00:36:19.897505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 00:36:19.897512 kernel: Trampoline variant of Tasks RCU enabled.
May 15 00:36:19.897520 kernel: Tracing variant of Tasks RCU enabled.
May 15 00:36:19.897527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 00:36:19.897534 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 00:36:19.897541 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 00:36:19.897549 kernel: GICv3: 256 SPIs implemented
May 15 00:36:19.897556 kernel: GICv3: 0 Extended SPIs implemented
May 15 00:36:19.897564 kernel: Root IRQ handler: gic_handle_irq
May 15 00:36:19.897571 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 00:36:19.897578 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 00:36:19.897585 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 00:36:19.897592 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 00:36:19.897599 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 00:36:19.897607 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 00:36:19.897614 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 00:36:19.897621 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 00:36:19.897629 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:36:19.897636 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 00:36:19.897644 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 00:36:19.897651 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 00:36:19.897658 kernel: arm-pv: using stolen time PV
May 15 00:36:19.897674 kernel: Console: colour dummy device 80x25
May 15 00:36:19.897682 kernel: ACPI: Core revision 20230628
May 15 00:36:19.897690 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 00:36:19.897697 kernel: pid_max: default: 32768 minimum: 301
May 15 00:36:19.897704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 00:36:19.897713 kernel: landlock: Up and running.
May 15 00:36:19.897721 kernel: SELinux: Initializing.
May 15 00:36:19.897728 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:36:19.897736 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 00:36:19.897743 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 00:36:19.897751 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:36:19.897758 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 00:36:19.897765 kernel: rcu: Hierarchical SRCU implementation.
May 15 00:36:19.897773 kernel: rcu: Max phase no-delay instances is 400.
May 15 00:36:19.897781 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 00:36:19.897789 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 00:36:19.897801 kernel: Remapping and enabling EFI services.
May 15 00:36:19.897808 kernel: smp: Bringing up secondary CPUs ...
May 15 00:36:19.897816 kernel: Detected PIPT I-cache on CPU1
May 15 00:36:19.897823 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 00:36:19.897830 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 00:36:19.897838 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:36:19.897845 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 00:36:19.897854 kernel: Detected PIPT I-cache on CPU2
May 15 00:36:19.897861 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 00:36:19.897869 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 00:36:19.897881 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:36:19.897889 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 00:36:19.897897 kernel: Detected PIPT I-cache on CPU3
May 15 00:36:19.897904 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 00:36:19.897912 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 00:36:19.897920 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 00:36:19.897927 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 00:36:19.897935 kernel: smp: Brought up 1 node, 4 CPUs
May 15 00:36:19.897944 kernel: SMP: Total of 4 processors activated.
May 15 00:36:19.897952 kernel: CPU features: detected: 32-bit EL0 Support
May 15 00:36:19.897959 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 00:36:19.897967 kernel: CPU features: detected: Common not Private translations
May 15 00:36:19.897975 kernel: CPU features: detected: CRC32 instructions
May 15 00:36:19.897982 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 00:36:19.897991 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 00:36:19.897999 kernel: CPU features: detected: LSE atomic instructions
May 15 00:36:19.898007 kernel: CPU features: detected: Privileged Access Never
May 15 00:36:19.898014 kernel: CPU features: detected: RAS Extension Support
May 15 00:36:19.898022 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 00:36:19.898030 kernel: CPU: All CPU(s) started at EL1
May 15 00:36:19.898037 kernel: alternatives: applying system-wide alternatives
May 15 00:36:19.898045 kernel: devtmpfs: initialized
May 15 00:36:19.898053 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 00:36:19.898061 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 00:36:19.898070 kernel: pinctrl core: initialized pinctrl subsystem
May 15 00:36:19.898078 kernel: SMBIOS 3.0.0 present.
May 15 00:36:19.898086 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 15 00:36:19.898093 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 00:36:19.898101 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 00:36:19.898109 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 00:36:19.898117 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 00:36:19.898124 kernel: audit: initializing netlink subsys (disabled)
May 15 00:36:19.898134 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
May 15 00:36:19.898141 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 00:36:19.898149 kernel: cpuidle: using governor menu
May 15 00:36:19.898157 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 00:36:19.898164 kernel: ASID allocator initialised with 32768 entries
May 15 00:36:19.898172 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 00:36:19.898180 kernel: Serial: AMBA PL011 UART driver
May 15 00:36:19.898188 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 00:36:19.898196 kernel: Modules: 0 pages in range for non-PLT usage
May 15 00:36:19.898205 kernel: Modules: 509008 pages in range for PLT usage
May 15 00:36:19.898212 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 00:36:19.898220 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 00:36:19.898228 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 00:36:19.898236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 00:36:19.898243 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 00:36:19.898251 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 00:36:19.898259 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 00:36:19.898266 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 00:36:19.898274 kernel: ACPI: Added _OSI(Module Device)
May 15 00:36:19.898283 kernel: ACPI: Added _OSI(Processor Device)
May 15 00:36:19.898290 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 00:36:19.898298 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 00:36:19.898306 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 00:36:19.898314 kernel: ACPI: Interpreter enabled
May 15 00:36:19.898321 kernel: ACPI: Using GIC for interrupt routing
May 15 00:36:19.898329 kernel: ACPI: MCFG table detected, 1 entries
May 15 00:36:19.898337 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 00:36:19.898344 kernel: printk: console [ttyAMA0] enabled
May 15 00:36:19.898353 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 00:36:19.898479 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 00:36:19.898558 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 00:36:19.898628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 00:36:19.898735 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 00:36:19.898815 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 00:36:19.898826 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 00:36:19.898838 kernel: PCI host bridge to bus 0000:00
May 15 00:36:19.898913 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 00:36:19.898977 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 00:36:19.899039 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 00:36:19.899102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 00:36:19.899185 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 00:36:19.899329 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 00:36:19.899408 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 00:36:19.899478 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 00:36:19.899548 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:36:19.899617 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 00:36:19.899704 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 00:36:19.899775 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 00:36:19.899862 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 00:36:19.899926 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 00:36:19.899988 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 00:36:19.899998 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 00:36:19.900006 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 00:36:19.900014 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 00:36:19.900022 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 00:36:19.900030 kernel: iommu: Default domain type: Translated
May 15 00:36:19.900040 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 00:36:19.900048 kernel: efivars: Registered efivars operations
May 15 00:36:19.900055 kernel: vgaarb: loaded
May 15 00:36:19.900063 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 00:36:19.900071 kernel: VFS: Disk quotas dquot_6.6.0
May 15 00:36:19.900082 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 00:36:19.900090 kernel: pnp: PnP ACPI init
May 15 00:36:19.900170 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 00:36:19.900182 kernel: pnp: PnP ACPI: found 1 devices
May 15 00:36:19.900192 kernel: NET: Registered PF_INET protocol family
May 15 00:36:19.900200 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 00:36:19.900208 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 00:36:19.900216 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 00:36:19.900223 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 00:36:19.900231 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 00:36:19.900239 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 00:36:19.900247 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:36:19.900256 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 00:36:19.900264 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 00:36:19.900272 kernel: PCI: CLS 0 bytes, default 64
May 15 00:36:19.900279 kernel: kvm [1]: HYP mode not available
May 15 00:36:19.900287 kernel: Initialise system trusted keyrings
May 15 00:36:19.900295 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 00:36:19.900303 kernel: Key type asymmetric registered
May 15 00:36:19.900310 kernel: Asymmetric key parser 'x509' registered
May 15 00:36:19.900318 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 00:36:19.900327 kernel: io scheduler mq-deadline registered
May 15 00:36:19.900335 kernel: io scheduler kyber registered
May 15 00:36:19.900343 kernel: io scheduler bfq registered
May 15 00:36:19.900351 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 00:36:19.900359 kernel: ACPI: button: Power Button [PWRB]
May 15 00:36:19.900367 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 00:36:19.900438 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 00:36:19.900449 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 00:36:19.900457 kernel: thunder_xcv, ver 1.0
May 15 00:36:19.900464 kernel: thunder_bgx, ver 1.0
May 15 00:36:19.900474 kernel: nicpf, ver 1.0
May 15 00:36:19.900481 kernel: nicvf, ver 1.0
May 15 00:36:19.900572 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 00:36:19.900638 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T00:36:19 UTC (1747269379)
May 15 00:36:19.900648 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 00:36:19.900656 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 00:36:19.900675 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 00:36:19.900687 kernel: watchdog: Hard watchdog permanently disabled
May 15 00:36:19.900698 kernel: NET: Registered PF_INET6 protocol family
May 15 00:36:19.900706 kernel: Segment Routing with IPv6
May 15 00:36:19.900713 kernel: In-situ OAM (IOAM) with IPv6
May 15 00:36:19.900721 kernel: NET: Registered PF_PACKET protocol family
May 15 00:36:19.900729 kernel: Key type dns_resolver registered
May 15 00:36:19.900736 kernel: registered taskstats version 1
May 15 00:36:19.900744 kernel: Loading compiled-in X.509 certificates
May 15 00:36:19.900752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 6afb3c096bffb4980a4bcc170ebe3729821d8e0d'
May 15 00:36:19.900759 kernel: Key type .fscrypt registered
May 15 00:36:19.900768 kernel: Key type fscrypt-provisioning registered
May 15 00:36:19.900776 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 00:36:19.900784 kernel: ima: Allocated hash algorithm: sha1
May 15 00:36:19.900796 kernel: ima: No architecture policies found
May 15 00:36:19.900805 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 00:36:19.900812 kernel: clk: Disabling unused clocks
May 15 00:36:19.900820 kernel: Freeing unused kernel memory: 39424K
May 15 00:36:19.900828 kernel: Run /init as init process
May 15 00:36:19.900836 kernel: with arguments:
May 15 00:36:19.900844 kernel: /init
May 15 00:36:19.900852 kernel: with environment:
May 15 00:36:19.900859 kernel: HOME=/
May 15 00:36:19.900867 kernel: TERM=linux
May 15 00:36:19.900874 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 00:36:19.900884 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:36:19.900894 systemd[1]: Detected virtualization kvm.
May 15 00:36:19.900904 systemd[1]: Detected architecture arm64.
May 15 00:36:19.900912 systemd[1]: Running in initrd.
May 15 00:36:19.900920 systemd[1]: No hostname configured, using default hostname.
May 15 00:36:19.900928 systemd[1]: Hostname set to .
May 15 00:36:19.900936 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:36:19.900944 systemd[1]: Queued start job for default target initrd.target.
May 15 00:36:19.900953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:36:19.900961 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:36:19.900971 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 00:36:19.900980 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:36:19.900988 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 00:36:19.900996 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 00:36:19.901006 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 00:36:19.901015 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 00:36:19.901023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:36:19.901033 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:36:19.901041 systemd[1]: Reached target paths.target - Path Units.
May 15 00:36:19.901049 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:36:19.901057 systemd[1]: Reached target swap.target - Swaps.
May 15 00:36:19.901065 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:36:19.901073 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:36:19.901082 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:36:19.901090 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 00:36:19.901098 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 00:36:19.901108 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:36:19.901116 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:36:19.901125 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:36:19.901133 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:36:19.901141 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 00:36:19.901149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:36:19.901158 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 00:36:19.901166 systemd[1]: Starting systemd-fsck-usr.service...
May 15 00:36:19.901176 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:36:19.901184 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:36:19.901192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:36:19.901200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 00:36:19.901209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:36:19.901217 systemd[1]: Finished systemd-fsck-usr.service.
May 15 00:36:19.901227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:36:19.901236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:36:19.901244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:36:19.901269 systemd-journald[237]: Collecting audit messages is disabled.
May 15 00:36:19.901290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 00:36:19.901298 kernel: Bridge firewalling registered
May 15 00:36:19.901306 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:36:19.901315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:36:19.901323 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:36:19.901332 systemd-journald[237]: Journal started
May 15 00:36:19.901353 systemd-journald[237]: Runtime Journal (/run/log/journal/7641893d8f8d4206998479902452432d) is 5.9M, max 47.3M, 41.4M free.
May 15 00:36:19.874823 systemd-modules-load[238]: Inserted module 'overlay'
May 15 00:36:19.903282 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:36:19.890628 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 15 00:36:19.905826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:36:19.909167 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:36:19.912708 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:36:19.916630 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:36:19.918876 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:36:19.920484 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 00:36:19.924686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:36:19.927829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:36:19.934145 dracut-cmdline[275]: dracut-dracut-053
May 15 00:36:19.936567 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3ad4d2a855aaa69496d8c2bf8d7e3c4212e29ec2df18e8282fb10689c3032596
May 15 00:36:19.958563 systemd-resolved[280]: Positive Trust Anchors:
May 15 00:36:19.958583 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:36:19.958614 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:36:19.963236 systemd-resolved[280]: Defaulting to hostname 'linux'.
May 15 00:36:19.965024 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:36:19.965906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:36:20.010683 kernel: SCSI subsystem initialized
May 15 00:36:20.014679 kernel: Loading iSCSI transport class v2.0-870.
May 15 00:36:20.024689 kernel: iscsi: registered transport (tcp)
May 15 00:36:20.034970 kernel: iscsi: registered transport (qla4xxx)
May 15 00:36:20.034990 kernel: QLogic iSCSI HBA Driver
May 15 00:36:20.077768 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 00:36:20.091812 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 00:36:20.107817 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 00:36:20.107861 kernel: device-mapper: uevent: version 1.0.3
May 15 00:36:20.109034 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 00:36:20.155718 kernel: raid6: neonx8 gen() 15793 MB/s
May 15 00:36:20.172685 kernel: raid6: neonx4 gen() 15657 MB/s
May 15 00:36:20.189684 kernel: raid6: neonx2 gen() 13347 MB/s
May 15 00:36:20.206686 kernel: raid6: neonx1 gen() 10486 MB/s
May 15 00:36:20.223677 kernel: raid6: int64x8 gen() 6962 MB/s
May 15 00:36:20.240684 kernel: raid6: int64x4 gen() 7354 MB/s
May 15 00:36:20.257685 kernel: raid6: int64x2 gen() 6130 MB/s
May 15 00:36:20.274676 kernel: raid6: int64x1 gen() 5062 MB/s
May 15 00:36:20.274698 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
May 15 00:36:20.291693 kernel: raid6: .... xor() 11929 MB/s, rmw enabled
May 15 00:36:20.291718 kernel: raid6: using neon recovery algorithm
May 15 00:36:20.296735 kernel: xor: measuring software checksum speed
May 15 00:36:20.296754 kernel: 8regs : 19807 MB/sec
May 15 00:36:20.297796 kernel: 32regs : 19231 MB/sec
May 15 00:36:20.297821 kernel: arm64_neon : 27043 MB/sec
May 15 00:36:20.297840 kernel: xor: using function: arm64_neon (27043 MB/sec)
May 15 00:36:20.349695 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 00:36:20.362697 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:36:20.374828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:36:20.386353 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 15 00:36:20.389536 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:36:20.393150 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 00:36:20.407217 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 15 00:36:20.434741 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:36:20.454823 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:36:20.494258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:36:20.501833 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 00:36:20.513612 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 00:36:20.515421 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:36:20.519220 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:36:20.520367 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:36:20.530847 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 00:36:20.534966 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 15 00:36:20.535127 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 00:36:20.539857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 00:36:20.539892 kernel: GPT:9289727 != 19775487
May 15 00:36:20.539908 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 00:36:20.539920 kernel: GPT:9289727 != 19775487
May 15 00:36:20.540837 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 00:36:20.540863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:36:20.542199 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:36:20.549021 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:36:20.549130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:36:20.553196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:36:20.555354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:36:20.555492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:36:20.558256 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:36:20.563653 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (510)
May 15 00:36:20.566703 kernel: BTRFS: device fsid c82d3215-8134-4516-8c53-9d29a8823a8c devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (529)
May 15 00:36:20.567994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:36:20.579685 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:36:20.585598 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 00:36:20.589987 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 00:36:20.596796 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 00:36:20.600297 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 00:36:20.601214 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 00:36:20.610873 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 00:36:20.612371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 00:36:20.617230 disk-uuid[553]: Primary Header is updated.
May 15 00:36:20.617230 disk-uuid[553]: Secondary Entries is updated.
May 15 00:36:20.617230 disk-uuid[553]: Secondary Header is updated.
May 15 00:36:20.624676 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:36:20.636816 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:36:20.637577 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:36:21.633575 disk-uuid[554]: The operation has completed successfully.
May 15 00:36:21.634986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 00:36:21.656116 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 00:36:21.656213 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 00:36:21.676848 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 00:36:21.679761 sh[575]: Success
May 15 00:36:21.694718 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 00:36:21.724133 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 00:36:21.735950 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 00:36:21.737558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 00:36:21.748297 kernel: BTRFS info (device dm-0): first mount of filesystem c82d3215-8134-4516-8c53-9d29a8823a8c
May 15 00:36:21.748332 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 15 00:36:21.748351 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 00:36:21.748361 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 00:36:21.748865 kernel: BTRFS info (device dm-0): using free space tree
May 15 00:36:21.752401 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 00:36:21.753760 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 00:36:21.765798 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 00:36:21.767293 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 00:36:21.777808 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:36:21.777846 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:36:21.777858 kernel: BTRFS info (device vda6): using free space tree
May 15 00:36:21.781052 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:36:21.788222 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 00:36:21.789368 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:36:21.796153 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 00:36:21.802860 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 00:36:21.874488 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:36:21.884926 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:36:21.899452 ignition[670]: Ignition 2.19.0
May 15 00:36:21.899461 ignition[670]: Stage: fetch-offline
May 15 00:36:21.899495 ignition[670]: no configs at "/usr/lib/ignition/base.d"
May 15 00:36:21.899503 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:21.899644 ignition[670]: parsed url from cmdline: ""
May 15 00:36:21.899647 ignition[670]: no config URL provided
May 15 00:36:21.899651 ignition[670]: reading system config file "/usr/lib/ignition/user.ign"
May 15 00:36:21.899658 ignition[670]: no config at "/usr/lib/ignition/user.ign"
May 15 00:36:21.899696 ignition[670]: op(1): [started] loading QEMU firmware config module
May 15 00:36:21.899700 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 00:36:21.907386 ignition[670]: op(1): [finished] loading QEMU firmware config module
May 15 00:36:21.910034 systemd-networkd[767]: lo: Link UP
May 15 00:36:21.910048 systemd-networkd[767]: lo: Gained carrier
May 15 00:36:21.910747 systemd-networkd[767]: Enumeration completed
May 15 00:36:21.911184 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:36:21.911187 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:36:21.911891 systemd-networkd[767]: eth0: Link UP
May 15 00:36:21.911894 systemd-networkd[767]: eth0: Gained carrier
May 15 00:36:21.911901 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:36:21.911982 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:36:21.914066 systemd[1]: Reached target network.target - Network.
May 15 00:36:21.936707 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:36:21.956505 ignition[670]: parsing config with SHA512: 46d000489ff9eeff439e06ec4990651818d79d2b4578d608b018de71eadb1294c725247708dcf96037bf8e0e20997107c113c498bc43b64cc8d32f6b9ff7d428
May 15 00:36:21.962106 unknown[670]: fetched base config from "system"
May 15 00:36:21.962115 unknown[670]: fetched user config from "qemu"
May 15 00:36:21.962691 ignition[670]: fetch-offline: fetch-offline passed
May 15 00:36:21.964385 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:36:21.962768 ignition[670]: Ignition finished successfully
May 15 00:36:21.965795 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 00:36:21.971906 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 00:36:21.982437 ignition[774]: Ignition 2.19.0
May 15 00:36:21.982448 ignition[774]: Stage: kargs
May 15 00:36:21.982622 ignition[774]: no configs at "/usr/lib/ignition/base.d"
May 15 00:36:21.982632 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:21.985819 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 00:36:21.983570 ignition[774]: kargs: kargs passed
May 15 00:36:21.983618 ignition[774]: Ignition finished successfully
May 15 00:36:21.998821 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 00:36:22.008254 ignition[783]: Ignition 2.19.0
May 15 00:36:22.008264 ignition[783]: Stage: disks
May 15 00:36:22.008432 ignition[783]: no configs at "/usr/lib/ignition/base.d"
May 15 00:36:22.008445 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:22.009382 ignition[783]: disks: disks passed
May 15 00:36:22.011043 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 00:36:22.009427 ignition[783]: Ignition finished successfully
May 15 00:36:22.013187 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 00:36:22.014495 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 00:36:22.016039 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:36:22.017592 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:36:22.019314 systemd[1]: Reached target basic.target - Basic System.
May 15 00:36:22.029823 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 00:36:22.039134 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 00:36:22.042998 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 00:36:22.045313 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 00:36:22.089676 kernel: EXT4-fs (vda9): mounted filesystem 5a01cbd3-e7cb-4475-87b3-07e348161203 r/w with ordered data mode. Quota mode: none.
May 15 00:36:22.090409 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 00:36:22.091729 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 00:36:22.106762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:36:22.108541 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 00:36:22.109699 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 00:36:22.109816 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 00:36:22.116292 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
May 15 00:36:22.109844 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:36:22.116100 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 00:36:22.122500 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:36:22.122526 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:36:22.122538 kernel: BTRFS info (device vda6): using free space tree
May 15 00:36:22.122548 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:36:22.117867 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 00:36:22.124282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:36:22.161431 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
May 15 00:36:22.165862 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
May 15 00:36:22.169680 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
May 15 00:36:22.172589 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 00:36:22.243512 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 00:36:22.255791 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 00:36:22.257328 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 00:36:22.263673 kernel: BTRFS info (device vda6): last unmount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:36:22.278950 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 00:36:22.281776 ignition[914]: INFO : Ignition 2.19.0
May 15 00:36:22.281776 ignition[914]: INFO : Stage: mount
May 15 00:36:22.283311 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:36:22.283311 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:22.283311 ignition[914]: INFO : mount: mount passed
May 15 00:36:22.283311 ignition[914]: INFO : Ignition finished successfully
May 15 00:36:22.285518 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 00:36:22.294806 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 00:36:22.746969 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 00:36:22.756823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 00:36:22.761675 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
May 15 00:36:22.763775 kernel: BTRFS info (device vda6): first mount of filesystem 472de571-4852-412e-83c6-4e5fddef810b
May 15 00:36:22.763806 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 00:36:22.763827 kernel: BTRFS info (device vda6): using free space tree
May 15 00:36:22.766682 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 00:36:22.767210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 00:36:22.784746 ignition[946]: INFO : Ignition 2.19.0
May 15 00:36:22.784746 ignition[946]: INFO : Stage: files
May 15 00:36:22.786373 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:36:22.786373 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:22.786373 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
May 15 00:36:22.789440 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 00:36:22.789440 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 00:36:22.789440 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 00:36:22.789440 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 00:36:22.789440 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 00:36:22.789152 unknown[946]: wrote ssh authorized keys file for user: core
May 15 00:36:22.794976 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 00:36:22.794976 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 15 00:36:22.828508 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 00:36:22.927375 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 00:36:22.929391 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:36:22.929391 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 00:36:23.260918 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 00:36:23.371022 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 00:36:23.372901 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 15 00:36:23.394089 systemd-networkd[767]: eth0: Gained IPv6LL
May 15 00:36:23.627056 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 00:36:24.201656 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 00:36:24.204190 ignition[946]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 00:36:24.227180 ignition[946]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:36:24.230902 ignition[946]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 00:36:24.233387 ignition[946]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 00:36:24.233387 ignition[946]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 00:36:24.233387 ignition[946]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 00:36:24.233387 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:36:24.233387 ignition[946]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 00:36:24.233387 ignition[946]: INFO : files: files passed
May 15 00:36:24.233387 ignition[946]: INFO : Ignition finished successfully
May 15 00:36:24.234825 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 00:36:24.240855 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 00:36:24.244828 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 00:36:24.246739 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 00:36:24.246834 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 00:36:24.251880 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 00:36:24.253231 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:36:24.253231 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:36:24.257231 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 00:36:24.254688 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:36:24.256228 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 00:36:24.270166 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 00:36:24.289976 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 00:36:24.290755 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 00:36:24.292314 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 00:36:24.293612 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 00:36:24.295087 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 00:36:24.306820 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 00:36:24.318107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:36:24.320763 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 00:36:24.333230 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 00:36:24.334464 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:36:24.336577 systemd[1]: Stopped target timers.target - Timer Units.
May 15 00:36:24.338357 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 00:36:24.338476 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 00:36:24.341005 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 00:36:24.342969 systemd[1]: Stopped target basic.target - Basic System.
May 15 00:36:24.344582 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 00:36:24.346319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 00:36:24.348278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 00:36:24.350310 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 00:36:24.352133 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 00:36:24.354025 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 00:36:24.355929 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 00:36:24.357646 systemd[1]: Stopped target swap.target - Swaps.
May 15 00:36:24.359174 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 00:36:24.359290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 00:36:24.361334 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 00:36:24.362475 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:36:24.364135 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 00:36:24.367738 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:36:24.370126 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 00:36:24.370243 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 00:36:24.372877 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 00:36:24.372997 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 00:36:24.375037 systemd[1]: Stopped target paths.target - Path Units.
May 15 00:36:24.376642 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 00:36:24.381731 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:36:24.384286 systemd[1]: Stopped target slices.target - Slice Units.
May 15 00:36:24.385259 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 00:36:24.386770 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 00:36:24.386868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 00:36:24.388414 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 00:36:24.388496 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 00:36:24.390011 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 00:36:24.390117 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 00:36:24.391859 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 00:36:24.391962 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 00:36:24.403822 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 00:36:24.405286 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 00:36:24.406262 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 00:36:24.406398 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:36:24.408257 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 00:36:24.408357 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 00:36:24.413498 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 00:36:24.413592 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 00:36:24.416488 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 00:36:24.420745 ignition[999]: INFO : Ignition 2.19.0
May 15 00:36:24.420745 ignition[999]: INFO : Stage: umount
May 15 00:36:24.422133 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 00:36:24.422133 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 00:36:24.422133 ignition[999]: INFO : umount: umount passed
May 15 00:36:24.422133 ignition[999]: INFO : Ignition finished successfully
May 15 00:36:24.425467 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 00:36:24.425560 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 00:36:24.426730 systemd[1]: Stopped target network.target - Network.
May 15 00:36:24.428352 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 00:36:24.428412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 00:36:24.429504 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 00:36:24.429552 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 00:36:24.431339 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 00:36:24.431383 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 00:36:24.432997 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 00:36:24.433043 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 00:36:24.435809 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 00:36:24.437492 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 00:36:24.439144 systemd-networkd[767]: eth0: DHCPv6 lease lost
May 15 00:36:24.439468 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 00:36:24.439551 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 00:36:24.441069 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 00:36:24.441162 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 00:36:24.443556 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 00:36:24.443711 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 00:36:24.446897 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 00:36:24.446935 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:36:24.448507 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 00:36:24.448559 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 00:36:24.458752 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 00:36:24.459614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 00:36:24.459698 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 00:36:24.461835 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:36:24.461881 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:36:24.463721 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 00:36:24.463766 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 00:36:24.465543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 00:36:24.465587 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:36:24.467838 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:36:24.480877 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 00:36:24.481978 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 00:36:24.485363 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 00:36:24.485517 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:36:24.487648 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 00:36:24.487722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 00:36:24.489472 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 00:36:24.489505 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:36:24.491314 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 00:36:24.491362 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 00:36:24.494089 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 00:36:24.494143 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 00:36:24.496809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 00:36:24.496853 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 00:36:24.508794 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 00:36:24.509807 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 00:36:24.509864 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:36:24.511914 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 15 00:36:24.511959 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:36:24.513885 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 00:36:24.513927 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:36:24.516044 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 00:36:24.516087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:36:24.518377 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 00:36:24.518453 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 00:36:24.522314 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 00:36:24.524489 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 00:36:24.535586 systemd[1]: Switching root.
May 15 00:36:24.570609 systemd-journald[237]: Journal stopped
May 15 00:36:25.274129 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
May 15 00:36:25.274185 kernel: SELinux: policy capability network_peer_controls=1
May 15 00:36:25.274197 kernel: SELinux: policy capability open_perms=1
May 15 00:36:25.274207 kernel: SELinux: policy capability extended_socket_class=1
May 15 00:36:25.274216 kernel: SELinux: policy capability always_check_network=0
May 15 00:36:25.274235 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 00:36:25.274245 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 00:36:25.274254 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 00:36:25.274264 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 00:36:25.274273 kernel: audit: type=1403 audit(1747269384.732:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 00:36:25.274284 systemd[1]: Successfully loaded SELinux policy in 35.451ms.
May 15 00:36:25.274301 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.547ms.
May 15 00:36:25.274312 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 00:36:25.274323 systemd[1]: Detected virtualization kvm.
May 15 00:36:25.274335 systemd[1]: Detected architecture arm64.
May 15 00:36:25.274346 systemd[1]: Detected first boot.
May 15 00:36:25.274356 systemd[1]: Initializing machine ID from VM UUID.
May 15 00:36:25.274366 zram_generator::config[1044]: No configuration found.
May 15 00:36:25.274377 systemd[1]: Populated /etc with preset unit settings.
May 15 00:36:25.274387 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 00:36:25.274398 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 00:36:25.274408 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 00:36:25.274422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 00:36:25.274433 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 00:36:25.274443 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 00:36:25.274454 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 00:36:25.274466 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 00:36:25.274481 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 00:36:25.274501 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 00:36:25.274513 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 00:36:25.274524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 00:36:25.274537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 00:36:25.274548 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 00:36:25.274559 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 00:36:25.274570 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 00:36:25.274581 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 00:36:25.274592 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 00:36:25.274604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 00:36:25.274615 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 00:36:25.274692 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 00:36:25.274712 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 00:36:25.274723 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 00:36:25.274734 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 00:36:25.274744 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 00:36:25.274755 systemd[1]: Reached target slices.target - Slice Units.
May 15 00:36:25.274766 systemd[1]: Reached target swap.target - Swaps.
May 15 00:36:25.274782 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 00:36:25.274796 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 00:36:25.274809 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 00:36:25.274819 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 00:36:25.274830 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 00:36:25.274840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 00:36:25.274851 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 00:36:25.274862 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 00:36:25.274872 systemd[1]: Mounting media.mount - External Media Directory...
May 15 00:36:25.274882 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 00:36:25.274892 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 00:36:25.274905 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 00:36:25.274916 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 00:36:25.274926 systemd[1]: Reached target machines.target - Containers.
May 15 00:36:25.274936 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 00:36:25.274947 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:36:25.274958 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 00:36:25.274968 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 00:36:25.274979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:36:25.274991 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:36:25.275001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:36:25.275011 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 00:36:25.275023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:36:25.275033 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 00:36:25.275044 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 00:36:25.275054 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 00:36:25.275064 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 00:36:25.275076 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 00:36:25.275087 kernel: fuse: init (API version 7.39)
May 15 00:36:25.275097 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 00:36:25.275107 kernel: loop: module loaded
May 15 00:36:25.275117 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 00:36:25.275132 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 00:36:25.275142 kernel: ACPI: bus type drm_connector registered
May 15 00:36:25.275152 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 00:36:25.275162 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 00:36:25.275173 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 00:36:25.275185 systemd[1]: Stopped verity-setup.service.
May 15 00:36:25.275214 systemd-journald[1115]: Collecting audit messages is disabled.
May 15 00:36:25.275239 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 00:36:25.275250 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 00:36:25.275263 systemd-journald[1115]: Journal started
May 15 00:36:25.275286 systemd-journald[1115]: Runtime Journal (/run/log/journal/7641893d8f8d4206998479902452432d) is 5.9M, max 47.3M, 41.4M free.
May 15 00:36:25.103300 systemd[1]: Queued start job for default target multi-user.target.
May 15 00:36:25.118267 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 00:36:25.118627 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 00:36:25.278602 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 00:36:25.279203 systemd[1]: Mounted media.mount - External Media Directory.
May 15 00:36:25.280433 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 00:36:25.281683 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 00:36:25.282846 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 00:36:25.285673 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 00:36:25.287034 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 00:36:25.288468 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 00:36:25.288614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 00:36:25.289756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:36:25.289910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:36:25.290980 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:36:25.291114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:36:25.292148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:36:25.292283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:36:25.293550 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 00:36:25.293699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 00:36:25.294734 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:36:25.294879 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:36:25.295911 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 00:36:25.297169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 00:36:25.298304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 00:36:25.310389 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 00:36:25.315747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 00:36:25.317796 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 00:36:25.318865 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 00:36:25.318893 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 00:36:25.320500 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 00:36:25.322496 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 00:36:25.328878 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 00:36:25.330045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:36:25.331384 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 00:36:25.333323 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 00:36:25.334524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:36:25.339165 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 00:36:25.341218 systemd-journald[1115]: Time spent on flushing to /var/log/journal/7641893d8f8d4206998479902452432d is 24.734ms for 858 entries.
May 15 00:36:25.341218 systemd-journald[1115]: System Journal (/var/log/journal/7641893d8f8d4206998479902452432d) is 8.0M, max 195.6M, 187.6M free.
May 15 00:36:25.371982 systemd-journald[1115]: Received client request to flush runtime journal.
May 15 00:36:25.342831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:36:25.343981 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:36:25.346951 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 00:36:25.348793 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 00:36:25.351551 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 00:36:25.352943 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 00:36:25.354004 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 00:36:25.356785 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 00:36:25.364973 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 00:36:25.373338 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 00:36:25.383325 kernel: loop0: detected capacity change from 0 to 194096
May 15 00:36:25.383987 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 00:36:25.388439 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 00:36:25.389964 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 00:36:25.391606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:36:25.395789 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 00:36:25.394958 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
May 15 00:36:25.394976 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
May 15 00:36:25.411325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 00:36:25.418003 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 00:36:25.419786 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 00:36:25.423557 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 00:36:25.426587 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 15 00:36:25.438872 kernel: loop1: detected capacity change from 0 to 114328
May 15 00:36:25.455324 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 00:36:25.463875 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 00:36:25.475703 kernel: loop2: detected capacity change from 0 to 114432
May 15 00:36:25.478493 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
May 15 00:36:25.478843 systemd-tmpfiles[1179]: ACLs are not supported, ignoring.
May 15 00:36:25.483136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 00:36:25.520721 kernel: loop3: detected capacity change from 0 to 194096
May 15 00:36:25.529695 kernel: loop4: detected capacity change from 0 to 114328
May 15 00:36:25.535748 kernel: loop5: detected capacity change from 0 to 114432
May 15 00:36:25.541213 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 00:36:25.541633 (sd-merge)[1184]: Merged extensions into '/usr'.
May 15 00:36:25.547236 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 00:36:25.547248 systemd[1]: Reloading...
May 15 00:36:25.603712 zram_generator::config[1213]: No configuration found.
May 15 00:36:25.658056 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 00:36:25.697820 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:36:25.733363 systemd[1]: Reloading finished in 185 ms.
May 15 00:36:25.762803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 00:36:25.764278 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 00:36:25.778033 systemd[1]: Starting ensure-sysext.service...
May 15 00:36:25.779935 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 00:36:25.786300 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
May 15 00:36:25.786314 systemd[1]: Reloading...
May 15 00:36:25.796353 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 00:36:25.796635 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 00:36:25.797325 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 00:36:25.797544 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 15 00:36:25.797591 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
May 15 00:36:25.799890 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:36:25.799901 systemd-tmpfiles[1245]: Skipping /boot
May 15 00:36:25.807087 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
May 15 00:36:25.807103 systemd-tmpfiles[1245]: Skipping /boot
May 15 00:36:25.816815 zram_generator::config[1270]: No configuration found.
May 15 00:36:25.907910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:36:25.944120 systemd[1]: Reloading finished in 157 ms.
May 15 00:36:25.964798 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 00:36:25.979116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 00:36:25.986112 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 15 00:36:25.988375 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 00:36:25.990446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 00:36:25.995958 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 00:36:26.002240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 00:36:26.006966 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 00:36:26.011096 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:36:26.014951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:36:26.019049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:36:26.021103 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:36:26.023683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:36:26.025361 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 00:36:26.034676 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 00:36:26.036172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:36:26.036297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:36:26.037558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:36:26.037722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:36:26.039333 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:36:26.039459 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:36:26.047022 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:36:26.051536 systemd-udevd[1314]: Using default interface naming scheme 'v255'.
May 15 00:36:26.058038 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:36:26.059989 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 00:36:26.062963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 00:36:26.063978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:36:26.066103 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 00:36:26.068066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:36:26.068210 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:36:26.076354 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 00:36:26.078130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 00:36:26.079691 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 00:36:26.081500 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 00:36:26.081618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 00:36:26.086715 systemd[1]: Finished ensure-sysext.service.
May 15 00:36:26.087943 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 00:36:26.094315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 00:36:26.099849 augenrules[1349]: No rules
May 15 00:36:26.101778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 00:36:26.103847 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 00:36:26.105342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 00:36:26.105397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 00:36:26.107833 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 00:36:26.108935 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 00:36:26.112329 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 15 00:36:26.113599 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 00:36:26.115052 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 00:36:26.116598 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 00:36:26.116785 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 00:36:26.119167 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 00:36:26.119301 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 00:36:26.136567 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 00:36:26.137478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 00:36:26.137526 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 00:36:26.162027 systemd-resolved[1312]: Positive Trust Anchors:
May 15 00:36:26.162346 systemd-resolved[1312]: .
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:36:26.162383 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 00:36:26.164556 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 00:36:26.170353 systemd-resolved[1312]: Defaulting to hostname 'linux'.
May 15 00:36:26.182718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 00:36:26.184345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 00:36:26.194992 systemd-networkd[1383]: lo: Link UP
May 15 00:36:26.195000 systemd-networkd[1383]: lo: Gained carrier
May 15 00:36:26.196203 systemd-networkd[1383]: Enumeration completed
May 15 00:36:26.196345 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 00:36:26.197303 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 00:36:26.198246 systemd[1]: Reached target network.target - Network.
May 15 00:36:26.198420 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:36:26.198424 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 00:36:26.198989 systemd-networkd[1383]: eth0: Link UP
May 15 00:36:26.198992 systemd-networkd[1383]: eth0: Gained carrier
May 15 00:36:26.199005 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:36:26.200027 systemd[1]: Reached target time-set.target - System Time Set.
May 15 00:36:26.206848 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1366)
May 15 00:36:26.207920 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 00:36:26.210008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 00:36:26.212884 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 00:36:26.217739 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.154/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 00:36:26.218805 systemd-timesyncd[1362]: Network configuration changed, trying to establish connection.
May 15 00:36:26.220281 systemd-timesyncd[1362]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:36:26.220614 systemd-timesyncd[1362]: Initial clock synchronization to Thu 2025-05-15 00:36:26.405892 UTC.
May 15 00:36:26.232779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 00:36:26.234463 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 00:36:26.265922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 00:36:26.276870 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 00:36:26.288815 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 00:36:26.301466 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:36:26.311496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 00:36:26.333711 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 00:36:26.335227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 00:36:26.336356 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 00:36:26.337462 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 15 00:36:26.339795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 15 00:36:26.341212 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 15 00:36:26.342383 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 15 00:36:26.343536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 15 00:36:26.344574 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:36:26.344614 systemd[1]: Reached target paths.target - Path Units.
May 15 00:36:26.345365 systemd[1]: Reached target timers.target - Timer Units.
May 15 00:36:26.347058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 15 00:36:26.349271 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 15 00:36:26.360564 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 15 00:36:26.362528 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 15 00:36:26.364030 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 15 00:36:26.365174 systemd[1]: Reached target sockets.target - Socket Units.
May 15 00:36:26.366117 systemd[1]: Reached target basic.target - Basic System.
May 15 00:36:26.367051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 15 00:36:26.367082 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 15 00:36:26.367949 systemd[1]: Starting containerd.service - containerd container runtime...
May 15 00:36:26.369881 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 15 00:36:26.370781 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 00:36:26.371778 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 15 00:36:26.375873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 15 00:36:26.377004 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 15 00:36:26.379903 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 15 00:36:26.383864 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 15 00:36:26.385539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 15 00:36:26.387876 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 15 00:36:26.392842 systemd[1]: Starting systemd-logind.service - User Login Management...
May 15 00:36:26.393302 jq[1413]: false
May 15 00:36:26.396028 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:36:26.396397 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 15 00:36:26.397084 systemd[1]: Starting update-engine.service - Update Engine...
May 15 00:36:26.401976 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 15 00:36:26.404359 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 15 00:36:26.410372 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:36:26.410545 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 15 00:36:26.413351 jq[1425]: true
May 15 00:36:26.414245 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:36:26.414399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 15 00:36:26.417281 extend-filesystems[1414]: Found loop3
May 15 00:36:26.418092 extend-filesystems[1414]: Found loop4
May 15 00:36:26.418092 extend-filesystems[1414]: Found loop5
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda1
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda2
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda3
May 15 00:36:26.418092 extend-filesystems[1414]: Found usr
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda4
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda6
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda7
May 15 00:36:26.418092 extend-filesystems[1414]: Found vda9
May 15 00:36:26.418092 extend-filesystems[1414]: Checking size of /dev/vda9
May 15 00:36:26.425374 dbus-daemon[1412]: [system] SELinux support is enabled
May 15 00:36:26.433392 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 15 00:36:26.438914 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:36:26.439116 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 15 00:36:26.446476 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:36:26.446527 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 15 00:36:26.447981 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:36:26.448009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 15 00:36:26.449736 update_engine[1423]: I20250515 00:36:26.449312 1423 main.cc:92] Flatcar Update Engine starting
May 15 00:36:26.449947 jq[1441]: true
May 15 00:36:26.451948 (ntainerd)[1442]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 15 00:36:26.456682 extend-filesystems[1414]: Resized partition /dev/vda9
May 15 00:36:26.460827 update_engine[1423]: I20250515 00:36:26.455206 1423 update_check_scheduler.cc:74] Next update check in 6m31s
May 15 00:36:26.460865 tar[1432]: linux-arm64/helm
May 15 00:36:26.457873 systemd[1]: Started update-engine.service - Update Engine.
May 15 00:36:26.461280 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 15 00:36:26.462790 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
May 15 00:36:26.469689 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1376)
May 15 00:36:26.469743 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 00:36:26.491464 systemd-logind[1419]: Watching system buttons on /dev/input/event0 (Power Button)
May 15 00:36:26.505811 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 00:36:26.495638 systemd-logind[1419]: New seat seat0.
May 15 00:36:26.497090 systemd[1]: Started systemd-logind.service - User Login Management.
May 15 00:36:26.506212 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 00:36:26.506212 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 00:36:26.506212 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 00:36:26.514764 extend-filesystems[1414]: Resized filesystem in /dev/vda9
May 15 00:36:26.508252 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:36:26.508404 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 15 00:36:26.523833 bash[1466]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:36:26.530171 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 15 00:36:26.532056 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 15 00:36:26.544938 locksmithd[1451]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:36:26.661693 containerd[1442]: time="2025-05-15T00:36:26.661531720Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 15 00:36:26.689736 containerd[1442]: time="2025-05-15T00:36:26.689560320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.695213 containerd[1442]: time="2025-05-15T00:36:26.695138840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 00:36:26.695213 containerd[1442]: time="2025-05-15T00:36:26.695184760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..."
type=io.containerd.event.v1
May 15 00:36:26.695213 containerd[1442]: time="2025-05-15T00:36:26.695204440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 00:36:26.695395 containerd[1442]: time="2025-05-15T00:36:26.695364920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 15 00:36:26.695395 containerd[1442]: time="2025-05-15T00:36:26.695390400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.695464 containerd[1442]: time="2025-05-15T00:36:26.695448120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:36:26.695464 containerd[1442]: time="2025-05-15T00:36:26.695460600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.695650 containerd[1442]: time="2025-05-15T00:36:26.695618080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:36:26.695650 containerd[1442]: time="2025-05-15T00:36:26.695642360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.695721 containerd[1442]: time="2025-05-15T00:36:26.695656440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:36:26.695721 containerd[1442]: time="2025-05-15T00:36:26.695689880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..."
type=io.containerd.snapshotter.v1
May 15 00:36:26.695805 containerd[1442]: time="2025-05-15T00:36:26.695785800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.696013 containerd[1442]: time="2025-05-15T00:36:26.695985480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 00:36:26.696114 containerd[1442]: time="2025-05-15T00:36:26.696095280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:36:26.696140 containerd[1442]: time="2025-05-15T00:36:26.696113680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 00:36:26.696229 containerd[1442]: time="2025-05-15T00:36:26.696187480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 00:36:26.696277 containerd[1442]: time="2025-05-15T00:36:26.696263760Z" level=info msg="metadata content store policy set" policy=shared
May 15 00:36:26.702649 containerd[1442]: time="2025-05-15T00:36:26.702610960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 00:36:26.702712 containerd[1442]: time="2025-05-15T00:36:26.702690840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 00:36:26.702734 containerd[1442]: time="2025-05-15T00:36:26.702714200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 15 00:36:26.702753 containerd[1442]: time="2025-05-15T00:36:26.702732400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..."
type=io.containerd.streaming.v1
May 15 00:36:26.702753 containerd[1442]: time="2025-05-15T00:36:26.702748520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 00:36:26.702920 containerd[1442]: time="2025-05-15T00:36:26.702899960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 00:36:26.703175 containerd[1442]: time="2025-05-15T00:36:26.703147000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 00:36:26.703280 containerd[1442]: time="2025-05-15T00:36:26.703260920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 15 00:36:26.703307 containerd[1442]: time="2025-05-15T00:36:26.703285080Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 15 00:36:26.703307 containerd[1442]: time="2025-05-15T00:36:26.703300720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 15 00:36:26.703341 containerd[1442]: time="2025-05-15T00:36:26.703316800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703341 containerd[1442]: time="2025-05-15T00:36:26.703330600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703377 containerd[1442]: time="2025-05-15T00:36:26.703342800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703377 containerd[1442]: time="2025-05-15T00:36:26.703356760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..."
type=io.containerd.service.v1
May 15 00:36:26.703377 containerd[1442]: time="2025-05-15T00:36:26.703370760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703426 containerd[1442]: time="2025-05-15T00:36:26.703382960Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703426 containerd[1442]: time="2025-05-15T00:36:26.703394880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703426 containerd[1442]: time="2025-05-15T00:36:26.703406600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 00:36:26.703474 containerd[1442]: time="2025-05-15T00:36:26.703426480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703474 containerd[1442]: time="2025-05-15T00:36:26.703442360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703474 containerd[1442]: time="2025-05-15T00:36:26.703454840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703474 containerd[1442]: time="2025-05-15T00:36:26.703467480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703547 containerd[1442]: time="2025-05-15T00:36:26.703480080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703547 containerd[1442]: time="2025-05-15T00:36:26.703493640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703547 containerd[1442]: time="2025-05-15T00:36:26.703505800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..."
type=io.containerd.grpc.v1
May 15 00:36:26.703547 containerd[1442]: time="2025-05-15T00:36:26.703519680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703613 containerd[1442]: time="2025-05-15T00:36:26.703556200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703613 containerd[1442]: time="2025-05-15T00:36:26.703572840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703613 containerd[1442]: time="2025-05-15T00:36:26.703584600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703613 containerd[1442]: time="2025-05-15T00:36:26.703596520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703703 containerd[1442]: time="2025-05-15T00:36:26.703612920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703703 containerd[1442]: time="2025-05-15T00:36:26.703630720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 15 00:36:26.703703 containerd[1442]: time="2025-05-15T00:36:26.703651840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703703 containerd[1442]: time="2025-05-15T00:36:26.703686640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 00:36:26.703703 containerd[1442]: time="2025-05-15T00:36:26.703699160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 00:36:26.704795 containerd[1442]: time="2025-05-15T00:36:26.704744920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
type=io.containerd.tracing.processor.v1
May 15 00:36:26.705049 containerd[1442]: time="2025-05-15T00:36:26.705013800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 15 00:36:26.705049 containerd[1442]: time="2025-05-15T00:36:26.705039680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 00:36:26.705097 containerd[1442]: time="2025-05-15T00:36:26.705055680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 15 00:36:26.705097 containerd[1442]: time="2025-05-15T00:36:26.705066640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 00:36:26.705097 containerd[1442]: time="2025-05-15T00:36:26.705084600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 15 00:36:26.705158 containerd[1442]: time="2025-05-15T00:36:26.705143840Z" level=info msg="NRI interface is disabled by configuration."
May 15 00:36:26.705194 containerd[1442]: time="2025-05-15T00:36:26.705166400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1
May 15 00:36:26.705685 containerd[1442]: time="2025-05-15T00:36:26.705608840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 00:36:26.705801 containerd[1442]: time="2025-05-15T00:36:26.705695000Z" level=info msg="Connect containerd service"
May 15 00:36:26.705801 containerd[1442]: time="2025-05-15T00:36:26.705727560Z" level=info msg="using legacy CRI server"
May 15 00:36:26.705801 containerd[1442]: time="2025-05-15T00:36:26.705735120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 15 00:36:26.706105 containerd[1442]: time="2025-05-15T00:36:26.706066920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 00:36:26.707326 containerd[1442]: time="2025-05-15T00:36:26.707286160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707576680Z" level=info msg="Start subscribing containerd event"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707631720Z" level=info msg="Start recovering state"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707710280Z" level=info msg="Start event monitor"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707729080Z" level=info msg="Start
snapshots syncer"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707738680Z" level=info msg="Start cni network conf syncer for default"
May 15 00:36:26.708011 containerd[1442]: time="2025-05-15T00:36:26.707747280Z" level=info msg="Start streaming server"
May 15 00:36:26.708157 containerd[1442]: time="2025-05-15T00:36:26.708105680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 00:36:26.708202 containerd[1442]: time="2025-05-15T00:36:26.708180000Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 00:36:26.708314 systemd[1]: Started containerd.service - containerd container runtime.
May 15 00:36:26.708393 containerd[1442]: time="2025-05-15T00:36:26.708353640Z" level=info msg="containerd successfully booted in 0.047640s"
May 15 00:36:26.814586 tar[1432]: linux-arm64/LICENSE
May 15 00:36:26.814724 tar[1432]: linux-arm64/README.md
May 15 00:36:26.832708 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 15 00:36:27.426256 systemd-networkd[1383]: eth0: Gained IPv6LL
May 15 00:36:27.428887 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 15 00:36:27.430615 systemd[1]: Reached target network-online.target - Network is Online.
May 15 00:36:27.440251 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 15 00:36:27.442725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:27.444965 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 15 00:36:27.463371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 15 00:36:27.468305 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 15 00:36:27.468511 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 15 00:36:27.469898 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 15 00:36:27.930171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:27.933988 (kubelet)[1509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:36:28.382698 kubelet[1509]: E0515 00:36:28.382644 1509 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:36:28.385379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:36:28.385527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:36:29.177952 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:36:29.197323 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 15 00:36:29.209952 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 15 00:36:29.215393 systemd[1]: issuegen.service: Deactivated successfully.
May 15 00:36:29.216712 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 15 00:36:29.219291 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 15 00:36:29.231704 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 15 00:36:29.234304 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 15 00:36:29.236557 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 15 00:36:29.238036 systemd[1]: Reached target getty.target - Login Prompts.
May 15 00:36:29.238918 systemd[1]: Reached target multi-user.target - Multi-User System.
May 15 00:36:29.242798 systemd[1]: Startup finished in 531ms (kernel) + 5.019s (initrd) + 4.554s (userspace) = 10.105s.
May 15 00:36:32.254649 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 00:36:32.255861 systemd[1]: Started sshd@0-10.0.0.154:22-10.0.0.1:54762.service - OpenSSH per-connection server daemon (10.0.0.1:54762).
May 15 00:36:32.328529 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 54762 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:32.330780 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:32.341146 systemd-logind[1419]: New session 1 of user core.
May 15 00:36:32.342192 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 00:36:32.359995 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 00:36:32.369901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 00:36:32.373429 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 00:36:32.385167 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:36:32.467278 systemd[1543]: Queued start job for default target default.target.
May 15 00:36:32.476666 systemd[1543]: Created slice app.slice - User Application Slice.
May 15 00:36:32.476714 systemd[1543]: Reached target paths.target - Paths.
May 15 00:36:32.476726 systemd[1543]: Reached target timers.target - Timers.
May 15 00:36:32.477980 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 00:36:32.488374 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 00:36:32.488439 systemd[1543]: Reached target sockets.target - Sockets.
May 15 00:36:32.488460 systemd[1543]: Reached target basic.target - Basic System.
May 15 00:36:32.488498 systemd[1543]: Reached target default.target - Main User Target.
May 15 00:36:32.488526 systemd[1543]: Startup finished in 97ms.
May 15 00:36:32.488886 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 00:36:32.490724 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 00:36:32.552198 systemd[1]: Started sshd@1-10.0.0.154:22-10.0.0.1:60298.service - OpenSSH per-connection server daemon (10.0.0.1:60298).
May 15 00:36:32.586645 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 60298 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:32.587950 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:32.591738 systemd-logind[1419]: New session 2 of user core.
May 15 00:36:32.600821 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 00:36:32.652964 sshd[1554]: pam_unix(sshd:session): session closed for user core
May 15 00:36:32.668077 systemd[1]: sshd@1-10.0.0.154:22-10.0.0.1:60298.service: Deactivated successfully.
May 15 00:36:32.669399 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:36:32.670984 systemd-logind[1419]: Session 2 logged out. Waiting for processes to exit.
May 15 00:36:32.671914 systemd[1]: Started sshd@2-10.0.0.154:22-10.0.0.1:60302.service - OpenSSH per-connection server daemon (10.0.0.1:60302).
May 15 00:36:32.672904 systemd-logind[1419]: Removed session 2.
May 15 00:36:32.706530 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 60302 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:32.707801 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:32.712381 systemd-logind[1419]: New session 3 of user core.
May 15 00:36:32.722807 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 00:36:32.770703 sshd[1561]: pam_unix(sshd:session): session closed for user core
May 15 00:36:32.778971 systemd[1]: sshd@2-10.0.0.154:22-10.0.0.1:60302.service: Deactivated successfully.
May 15 00:36:32.780357 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:36:32.782874 systemd-logind[1419]: Session 3 logged out. Waiting for processes to exit.
May 15 00:36:32.795979 systemd[1]: Started sshd@3-10.0.0.154:22-10.0.0.1:60316.service - OpenSSH per-connection server daemon (10.0.0.1:60316).
May 15 00:36:32.797026 systemd-logind[1419]: Removed session 3.
May 15 00:36:32.826538 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 60316 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:32.827814 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:32.831730 systemd-logind[1419]: New session 4 of user core.
May 15 00:36:32.846848 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 00:36:32.899171 sshd[1568]: pam_unix(sshd:session): session closed for user core
May 15 00:36:32.910027 systemd[1]: sshd@3-10.0.0.154:22-10.0.0.1:60316.service: Deactivated successfully.
May 15 00:36:32.911425 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:36:32.912608 systemd-logind[1419]: Session 4 logged out. Waiting for processes to exit.
May 15 00:36:32.913709 systemd[1]: Started sshd@4-10.0.0.154:22-10.0.0.1:60324.service - OpenSSH per-connection server daemon (10.0.0.1:60324).
May 15 00:36:32.914867 systemd-logind[1419]: Removed session 4.
May 15 00:36:32.951608 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 60324 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:32.952997 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:32.957891 systemd-logind[1419]: New session 5 of user core.
May 15 00:36:32.966941 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 00:36:33.037245 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 00:36:33.037556 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:36:33.050544 sudo[1578]: pam_unix(sudo:session): session closed for user root
May 15 00:36:33.052468 sshd[1575]: pam_unix(sshd:session): session closed for user core
May 15 00:36:33.065199 systemd[1]: sshd@4-10.0.0.154:22-10.0.0.1:60324.service: Deactivated successfully.
May 15 00:36:33.066653 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:36:33.068806 systemd-logind[1419]: Session 5 logged out. Waiting for processes to exit.
May 15 00:36:33.069557 systemd[1]: Started sshd@5-10.0.0.154:22-10.0.0.1:60336.service - OpenSSH per-connection server daemon (10.0.0.1:60336).
May 15 00:36:33.070297 systemd-logind[1419]: Removed session 5.
May 15 00:36:33.107612 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 60336 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:33.108942 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:33.112754 systemd-logind[1419]: New session 6 of user core.
May 15 00:36:33.120842 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 00:36:33.172288 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:36:33.172586 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:36:33.175654 sudo[1587]: pam_unix(sudo:session): session closed for user root
May 15 00:36:33.180318 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 15 00:36:33.180604 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:36:33.197951 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 15 00:36:33.199353 auditctl[1590]: No rules
May 15 00:36:33.200240 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:36:33.200457 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 15 00:36:33.202234 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 15 00:36:33.226661 augenrules[1608]: No rules
May 15 00:36:33.228033 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 15 00:36:33.229106 sudo[1586]: pam_unix(sudo:session): session closed for user root
May 15 00:36:33.230837 sshd[1583]: pam_unix(sshd:session): session closed for user core
May 15 00:36:33.241087 systemd[1]: sshd@5-10.0.0.154:22-10.0.0.1:60336.service: Deactivated successfully.
May 15 00:36:33.242534 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:36:33.243878 systemd-logind[1419]: Session 6 logged out. Waiting for processes to exit.
May 15 00:36:33.254931 systemd[1]: Started sshd@6-10.0.0.154:22-10.0.0.1:60344.service - OpenSSH per-connection server daemon (10.0.0.1:60344).
May 15 00:36:33.255781 systemd-logind[1419]: Removed session 6.
May 15 00:36:33.287805 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 60344 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:36:33.289114 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:36:33.293246 systemd-logind[1419]: New session 7 of user core.
May 15 00:36:33.300882 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 00:36:33.352028 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:36:33.352345 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 00:36:33.681897 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 00:36:33.682044 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 00:36:33.942089 dockerd[1637]: time="2025-05-15T00:36:33.941966345Z" level=info msg="Starting up"
May 15 00:36:34.086203 dockerd[1637]: time="2025-05-15T00:36:34.086151218Z" level=info msg="Loading containers: start."
May 15 00:36:34.171699 kernel: Initializing XFRM netlink socket
May 15 00:36:34.238527 systemd-networkd[1383]: docker0: Link UP
May 15 00:36:34.261972 dockerd[1637]: time="2025-05-15T00:36:34.261907958Z" level=info msg="Loading containers: done."
May 15 00:36:34.276352 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2487098629-merged.mount: Deactivated successfully.
May 15 00:36:34.278137 dockerd[1637]: time="2025-05-15T00:36:34.278083891Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:36:34.278232 dockerd[1637]: time="2025-05-15T00:36:34.278185489Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
May 15 00:36:34.278307 dockerd[1637]: time="2025-05-15T00:36:34.278283777Z" level=info msg="Daemon has completed initialization"
May 15 00:36:34.306864 dockerd[1637]: time="2025-05-15T00:36:34.306721433Z" level=info msg="API listen on /run/docker.sock"
May 15 00:36:34.307051 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 00:36:35.003349 containerd[1442]: time="2025-05-15T00:36:35.003300342Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 15 00:36:35.732913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593874763.mount: Deactivated successfully.
May 15 00:36:36.673385 containerd[1442]: time="2025-05-15T00:36:36.673333405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:36.674082 containerd[1442]: time="2025-05-15T00:36:36.674045381Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
May 15 00:36:36.675047 containerd[1442]: time="2025-05-15T00:36:36.674981028Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:36.677816 containerd[1442]: time="2025-05-15T00:36:36.677779753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:36.679015 containerd[1442]: time="2025-05-15T00:36:36.678968434Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.675619962s"
May 15 00:36:36.679015 containerd[1442]: time="2025-05-15T00:36:36.679005571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
May 15 00:36:36.697963 containerd[1442]: time="2025-05-15T00:36:36.697930623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 15 00:36:38.025941 containerd[1442]: time="2025-05-15T00:36:38.025891547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:38.026441 containerd[1442]: time="2025-05-15T00:36:38.026403952Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
May 15 00:36:38.027287 containerd[1442]: time="2025-05-15T00:36:38.027255881Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:38.031321 containerd[1442]: time="2025-05-15T00:36:38.031257678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:38.032499 containerd[1442]: time="2025-05-15T00:36:38.032441211Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.334472822s"
May 15 00:36:38.032499 containerd[1442]: time="2025-05-15T00:36:38.032478047Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
May 15 00:36:38.052545 containerd[1442]: time="2025-05-15T00:36:38.052508467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 15 00:36:38.491954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:36:38.501853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:38.594073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:38.597731 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 00:36:38.640426 kubelet[1872]: E0515 00:36:38.640348 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:36:38.643448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:36:38.643603 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:36:38.998328 containerd[1442]: time="2025-05-15T00:36:38.998277901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:38.999065 containerd[1442]: time="2025-05-15T00:36:38.999013170Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
May 15 00:36:38.999623 containerd[1442]: time="2025-05-15T00:36:38.999596391Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:39.003730 containerd[1442]: time="2025-05-15T00:36:39.003694262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:39.004520 containerd[1442]: time="2025-05-15T00:36:39.004480723Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 951.929235ms"
May 15 00:36:39.004555 containerd[1442]: time="2025-05-15T00:36:39.004516610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
May 15 00:36:39.022791 containerd[1442]: time="2025-05-15T00:36:39.022484457Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 15 00:36:39.979718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486041997.mount: Deactivated successfully.
May 15 00:36:40.179730 containerd[1442]: time="2025-05-15T00:36:40.179678661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:40.180639 containerd[1442]: time="2025-05-15T00:36:40.180460929Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 15 00:36:40.181297 containerd[1442]: time="2025-05-15T00:36:40.181271513Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:40.183862 containerd[1442]: time="2025-05-15T00:36:40.183813474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:40.184635 containerd[1442]: time="2025-05-15T00:36:40.184549514Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.162013743s"
May 15 00:36:40.184635 containerd[1442]: time="2025-05-15T00:36:40.184589316Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 15 00:36:40.203303 containerd[1442]: time="2025-05-15T00:36:40.203256962Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 00:36:40.801316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934810016.mount: Deactivated successfully.
May 15 00:36:41.525195 containerd[1442]: time="2025-05-15T00:36:41.525056571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.526043 containerd[1442]: time="2025-05-15T00:36:41.525809056Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
May 15 00:36:41.526703 containerd[1442]: time="2025-05-15T00:36:41.526676712Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.529819 containerd[1442]: time="2025-05-15T00:36:41.529786769Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.531973 containerd[1442]: time="2025-05-15T00:36:41.531925480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.328627717s"
May 15 00:36:41.531973 containerd[1442]: time="2025-05-15T00:36:41.531965423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
May 15 00:36:41.549845 containerd[1442]: time="2025-05-15T00:36:41.549811540Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 15 00:36:41.967535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180148062.mount: Deactivated successfully.
May 15 00:36:41.971589 containerd[1442]: time="2025-05-15T00:36:41.971540780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.971938 containerd[1442]: time="2025-05-15T00:36:41.971902350Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
May 15 00:36:41.972778 containerd[1442]: time="2025-05-15T00:36:41.972730625Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.974911 containerd[1442]: time="2025-05-15T00:36:41.974862151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:41.975752 containerd[1442]: time="2025-05-15T00:36:41.975720213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 425.87455ms"
May 15 00:36:41.975814 containerd[1442]: time="2025-05-15T00:36:41.975762243Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
May 15 00:36:41.994726 containerd[1442]: time="2025-05-15T00:36:41.994517203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 15 00:36:42.485952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339242462.mount: Deactivated successfully.
May 15 00:36:44.015278 containerd[1442]: time="2025-05-15T00:36:44.015212404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:44.015834 containerd[1442]: time="2025-05-15T00:36:44.015804180Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
May 15 00:36:44.016762 containerd[1442]: time="2025-05-15T00:36:44.016728312Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:44.019741 containerd[1442]: time="2025-05-15T00:36:44.019707562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:36:44.021072 containerd[1442]: time="2025-05-15T00:36:44.021035700Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.026481255s"
May 15 00:36:44.021101 containerd[1442]: time="2025-05-15T00:36:44.021072348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
May 15 00:36:48.615618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:48.622995 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:48.640648 systemd[1]: Reloading requested from client PID 2093 ('systemctl') (unit session-7.scope)...
May 15 00:36:48.640673 systemd[1]: Reloading...
May 15 00:36:48.693814 zram_generator::config[2132]: No configuration found.
May 15 00:36:48.824061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:36:48.878299 systemd[1]: Reloading finished in 237 ms.
May 15 00:36:48.923345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:48.925641 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:48.926862 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:36:48.927081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:48.928471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:49.026610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:49.030410 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:36:49.066321 kubelet[2179]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:36:49.066321 kubelet[2179]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release.
Image garbage collector will get sandbox image information from CRI.
May 15 00:36:49.066321 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:36:49.224985 kubelet[2179]: I0515 00:36:49.224727 2179 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:36:49.942881 kubelet[2179]: I0515 00:36:49.942838 2179 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 00:36:49.942881 kubelet[2179]: I0515 00:36:49.942870 2179 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:36:49.943101 kubelet[2179]: I0515 00:36:49.943075 2179 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 00:36:49.993966 kubelet[2179]: E0515 00:36:49.993636 2179 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.154:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:49.993966 kubelet[2179]: I0515 00:36:49.993828 2179 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:36:50.004832 kubelet[2179]: I0515 00:36:50.004805 2179 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:36:50.005330 kubelet[2179]: I0515 00:36:50.005299 2179 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:36:50.005599 kubelet[2179]: I0515 00:36:50.005423 2179 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 00:36:50.005821 kubelet[2179]: I0515 00:36:50.005805 2179 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:36:50.005876 kubelet[2179]: I0515 00:36:50.005867 2179 container_manager_linux.go:301] "Creating device plugin manager"
May 15 00:36:50.006186 kubelet[2179]: I0515 00:36:50.006171 2179 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:36:50.007172 kubelet[2179]: I0515 00:36:50.007150 2179 kubelet.go:400] "Attempting to sync node with API server"
May 15 00:36:50.007267 kubelet[2179]: I0515 00:36:50.007256 2179 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:36:50.007449 kubelet[2179]: I0515 00:36:50.007440 2179 kubelet.go:312] "Adding apiserver pod source"
May 15 00:36:50.008181 kubelet[2179]: I0515 00:36:50.007582 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:36:50.008181 kubelet[2179]: W0515 00:36:50.007847 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.008181 kubelet[2179]: E0515 00:36:50.007892 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.008181 kubelet[2179]: W0515 00:36:50.008105 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.008181 kubelet[2179]: E0515 00:36:50.008137 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.008765 kubelet[2179]: I0515 00:36:50.008642 2179 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 15 00:36:50.009172 kubelet[2179]: I0515 00:36:50.009131 2179 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:36:50.009509 kubelet[2179]: W0515 00:36:50.009319 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:36:50.010257 kubelet[2179]: I0515 00:36:50.010238 2179 server.go:1264] "Started kubelet"
May 15 00:36:50.013409 kubelet[2179]: I0515 00:36:50.013362 2179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:36:50.015131 kubelet[2179]: I0515 00:36:50.014771 2179 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:36:50.015977 kubelet[2179]: I0515 00:36:50.015943 2179 server.go:455] "Adding debug handlers to kubelet server"
May 15 00:36:50.016357 kubelet[2179]: E0515 00:36:50.016319 2179 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:36:50.016814 kubelet[2179]: I0515 00:36:50.016790 2179 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 00:36:50.016814 kubelet[2179]: I0515 00:36:50.016785 2179 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:36:50.016924 kubelet[2179]: I0515 00:36:50.016907 2179 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:36:50.016988 kubelet[2179]: I0515 00:36:50.016966 2179 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:36:50.017686 kubelet[2179]: E0515 00:36:50.017355 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.154:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.154:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8c455e90c452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:36:50.010219602 +0000 UTC m=+0.976735996,LastTimestamp:2025-05-15 00:36:50.010219602 +0000 UTC m=+0.976735996,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:36:50.017878 kubelet[2179]: W0515 00:36:50.017834 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.017878 kubelet[2179]: E0515 00:36:50.017879 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.018407 kubelet[2179]: E0515 00:36:50.018197 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="200ms"
May 15 00:36:50.018478 kubelet[2179]: I0515 00:36:50.018454 2179 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:36:50.018618 kubelet[2179]: I0515 00:36:50.018525 2179 factory.go:221] Registration of the systemd container factory successfully
May 15 00:36:50.018618 kubelet[2179]: I0515 00:36:50.018608 2179 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:36:50.019526 kubelet[2179]: I0515 00:36:50.019506 2179 factory.go:221] Registration of the containerd container factory successfully
May 15 00:36:50.030118 kubelet[2179]: I0515 00:36:50.030058 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:36:50.031808 kubelet[2179]: I0515 00:36:50.031780 2179 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:36:50.031944 kubelet[2179]: I0515 00:36:50.031932 2179 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 00:36:50.031980 kubelet[2179]: I0515 00:36:50.031955 2179 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 00:36:50.032133 kubelet[2179]: E0515 00:36:50.031998 2179 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:36:50.032654 kubelet[2179]: W0515 00:36:50.032504 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.032814 kubelet[2179]: E0515 00:36:50.032697 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.035230 kubelet[2179]: I0515 00:36:50.035210 2179 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 00:36:50.035230 kubelet[2179]: I0515 00:36:50.035227 2179 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 00:36:50.035335 kubelet[2179]: I0515 00:36:50.035244 2179 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:36:50.037528 kubelet[2179]: I0515 00:36:50.037502 2179 policy_none.go:49] "None policy: Start"
May 15 00:36:50.038360 kubelet[2179]: I0515 00:36:50.038054 2179 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 00:36:50.038360 kubelet[2179]: I0515 00:36:50.038079 2179 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:36:50.043191 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
May 15 00:36:50.057558 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
May 15 00:36:50.060314 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
May 15 00:36:50.070843 kubelet[2179]: I0515 00:36:50.070321 2179 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:36:50.070843 kubelet[2179]: I0515 00:36:50.070511 2179 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:36:50.070843 kubelet[2179]: I0515 00:36:50.070617 2179 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 00:36:50.072263 kubelet[2179]: E0515 00:36:50.072243 2179 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 15 00:36:50.118584 kubelet[2179]: I0515 00:36:50.118546 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 00:36:50.119085 kubelet[2179]: E0515 00:36:50.119061 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost"
May 15 00:36:50.132178 kubelet[2179]: I0515 00:36:50.132101 2179 topology_manager.go:215] "Topology Admit Handler" podUID="b4bf5f2b890f1aab3c63d83134a91619" podNamespace="kube-system" podName="kube-apiserver-localhost"
May 15 00:36:50.133013 kubelet[2179]: I0515 00:36:50.132986 2179 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
May 15 00:36:50.133944 kubelet[2179]: I0515 00:36:50.133906 2179 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
May 15 00:36:50.139378 systemd[1]: Created slice kubepods-burstable-podb4bf5f2b890f1aab3c63d83134a91619.slice - libcontainer container kubepods-burstable-podb4bf5f2b890f1aab3c63d83134a91619.slice.
May 15 00:36:50.167950 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice.
May 15 00:36:50.181991 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice.
May 15 00:36:50.218967 kubelet[2179]: I0515 00:36:50.218897 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:36:50.218967 kubelet[2179]: I0515 00:36:50.218929 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:36:50.218967 kubelet[2179]: I0515 00:36:50.218952 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:36:50.218967 kubelet[2179]: I0515 00:36:50.218990 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
May 15 00:36:50.218967 kubelet[2179]: I0515 00:36:50.219009 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost"
May 15 00:36:50.219253 kubelet[2179]: I0515 00:36:50.219028 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:36:50.219253 kubelet[2179]: I0515 00:36:50.219049 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:36:50.219253 kubelet[2179]: E0515 00:36:50.219064 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="400ms"
May 15 00:36:50.219253 kubelet[2179]: I0515 00:36:50.219081 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:36:50.219253 kubelet[2179]: I0515 00:36:50.219120 2179 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
May 15 00:36:50.320231 kubelet[2179]: I0515 00:36:50.320194 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 00:36:50.320477 kubelet[2179]: E0515 00:36:50.320453 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost"
May 15 00:36:50.467310 kubelet[2179]: E0515 00:36:50.467270 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:50.467967 containerd[1442]: time="2025-05-15T00:36:50.467926657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4bf5f2b890f1aab3c63d83134a91619,Namespace:kube-system,Attempt:0,}"
May 15 00:36:50.481249 kubelet[2179]: E0515 00:36:50.481156 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:50.481566 containerd[1442]: time="2025-05-15T00:36:50.481515111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}"
May 15 00:36:50.483902 kubelet[2179]: E0515 00:36:50.483868 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:50.484226 containerd[1442]: time="2025-05-15T00:36:50.484191229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}"
May 15 00:36:50.619700 kubelet[2179]: E0515 00:36:50.619646 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="800ms"
May 15 00:36:50.722148 kubelet[2179]: I0515 00:36:50.722087 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 00:36:50.722473 kubelet[2179]: E0515 00:36:50.722432 2179 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.154:6443/api/v1/nodes\": dial tcp 10.0.0.154:6443: connect: connection refused" node="localhost"
May 15 00:36:50.827363 kubelet[2179]: W0515 00:36:50.827295 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:50.827363 kubelet[2179]: E0515 00:36:50.827357 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.154:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.000990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465511690.mount: Deactivated successfully.
May 15 00:36:51.006069 containerd[1442]: time="2025-05-15T00:36:51.006017525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:36:51.006906 containerd[1442]: time="2025-05-15T00:36:51.006876454Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:36:51.007519 containerd[1442]: time="2025-05-15T00:36:51.007482264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:36:51.008196 containerd[1442]: time="2025-05-15T00:36:51.008160062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 15 00:36:51.008733 containerd[1442]: time="2025-05-15T00:36:51.008702653Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:36:51.009676 containerd[1442]: time="2025-05-15T00:36:51.009639574Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 15 00:36:51.010271 containerd[1442]: time="2025-05-15T00:36:51.010235695Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:36:51.014432 containerd[1442]: time="2025-05-15T00:36:51.014396131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 15 00:36:51.015382 containerd[1442]: time="2025-05-15T00:36:51.015343583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 533.76156ms"
May 15 00:36:51.017133 containerd[1442]: time="2025-05-15T00:36:51.016878587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.624252ms"
May 15 00:36:51.019838 containerd[1442]: time="2025-05-15T00:36:51.019555547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.542318ms"
May 15 00:36:51.168338 containerd[1442]: time="2025-05-15T00:36:51.168026484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:36:51.168338 containerd[1442]: time="2025-05-15T00:36:51.168048024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:36:51.168338 containerd[1442]: time="2025-05-15T00:36:51.168248372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:36:51.168338 containerd[1442]: time="2025-05-15T00:36:51.168269713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.168540 containerd[1442]: time="2025-05-15T00:36:51.168381698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.168540 containerd[1442]: time="2025-05-15T00:36:51.168353952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:36:51.168672 containerd[1442]: time="2025-05-15T00:36:51.168567633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.169489 containerd[1442]: time="2025-05-15T00:36:51.168676936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.172618 containerd[1442]: time="2025-05-15T00:36:51.171956663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:36:51.172618 containerd[1442]: time="2025-05-15T00:36:51.172031293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:36:51.172618 containerd[1442]: time="2025-05-15T00:36:51.172052393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.172618 containerd[1442]: time="2025-05-15T00:36:51.172125301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:36:51.189834 systemd[1]: Started cri-containerd-26db686d54dde642a09efac1c73af2fb4c382e786ad5cf43c9a4a45d2ec090c5.scope - libcontainer container 26db686d54dde642a09efac1c73af2fb4c382e786ad5cf43c9a4a45d2ec090c5.
May 15 00:36:51.190906 systemd[1]: Started cri-containerd-cb8e050640db89d661ec8345191c1d1728ad119c3a7000c4f26c5dc2c0e1b47a.scope - libcontainer container cb8e050640db89d661ec8345191c1d1728ad119c3a7000c4f26c5dc2c0e1b47a.
May 15 00:36:51.193535 systemd[1]: Started cri-containerd-5e57e074baf52738f600033cb8279b50aa79d9881d759fcb169848e0dbb9d6a4.scope - libcontainer container 5e57e074baf52738f600033cb8279b50aa79d9881d759fcb169848e0dbb9d6a4.
May 15 00:36:51.203912 kubelet[2179]: W0515 00:36:51.203846 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.203912 kubelet[2179]: E0515 00:36:51.203907 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.154:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.224394 containerd[1442]: time="2025-05-15T00:36:51.221724423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b4bf5f2b890f1aab3c63d83134a91619,Namespace:kube-system,Attempt:0,} returns sandbox id \"26db686d54dde642a09efac1c73af2fb4c382e786ad5cf43c9a4a45d2ec090c5\""
May 15 00:36:51.224496 kubelet[2179]: E0515 00:36:51.223233 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:51.226187 containerd[1442]: time="2025-05-15T00:36:51.226158636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb8e050640db89d661ec8345191c1d1728ad119c3a7000c4f26c5dc2c0e1b47a\""
May 15 00:36:51.226494 containerd[1442]: time="2025-05-15T00:36:51.226455676Z" level=info msg="CreateContainer within sandbox \"26db686d54dde642a09efac1c73af2fb4c382e786ad5cf43c9a4a45d2ec090c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 15 00:36:51.226886 kubelet[2179]: E0515 00:36:51.226870 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:51.228999 containerd[1442]: time="2025-05-15T00:36:51.228915391Z" level=info msg="CreateContainer within sandbox \"cb8e050640db89d661ec8345191c1d1728ad119c3a7000c4f26c5dc2c0e1b47a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 15 00:36:51.231876 containerd[1442]: time="2025-05-15T00:36:51.231796943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e57e074baf52738f600033cb8279b50aa79d9881d759fcb169848e0dbb9d6a4\""
May 15 00:36:51.232308 kubelet[2179]: E0515 00:36:51.232277 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:51.234011 containerd[1442]: time="2025-05-15T00:36:51.233972710Z" level=info msg="CreateContainer within sandbox \"5e57e074baf52738f600033cb8279b50aa79d9881d759fcb169848e0dbb9d6a4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 15 00:36:51.243607 containerd[1442]: time="2025-05-15T00:36:51.243572385Z" level=info msg="CreateContainer within sandbox \"26db686d54dde642a09efac1c73af2fb4c382e786ad5cf43c9a4a45d2ec090c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc26cd10d07cd2b2db53f8f3376697eff5dae3dbb7e3f3cc6e962feb55be41cb\""
May 15 00:36:51.244130 containerd[1442]: time="2025-05-15T00:36:51.244100082Z" level=info msg="CreateContainer within sandbox \"cb8e050640db89d661ec8345191c1d1728ad119c3a7000c4f26c5dc2c0e1b47a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be0682e3664f93a91e5e4388445d93a4ae654d1c5c768328dd6d3a56ae44cb0d\""
May 15 00:36:51.244401 containerd[1442]: time="2025-05-15T00:36:51.244125906Z" level=info msg="StartContainer for \"bc26cd10d07cd2b2db53f8f3376697eff5dae3dbb7e3f3cc6e962feb55be41cb\""
May 15 00:36:51.244637 containerd[1442]: time="2025-05-15T00:36:51.244611483Z" level=info msg="StartContainer for \"be0682e3664f93a91e5e4388445d93a4ae654d1c5c768328dd6d3a56ae44cb0d\""
May 15 00:36:51.249420 containerd[1442]: time="2025-05-15T00:36:51.249385777Z" level=info msg="CreateContainer within sandbox \"5e57e074baf52738f600033cb8279b50aa79d9881d759fcb169848e0dbb9d6a4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"abb8efe5dd2323d2fb18aa7caca3a0c90e733e33e5a56ce077a0ae725eefaf11\""
May 15 00:36:51.249919 containerd[1442]: time="2025-05-15T00:36:51.249894255Z" level=info msg="StartContainer for \"abb8efe5dd2323d2fb18aa7caca3a0c90e733e33e5a56ce077a0ae725eefaf11\""
May 15 00:36:51.272887 systemd[1]: Started cri-containerd-bc26cd10d07cd2b2db53f8f3376697eff5dae3dbb7e3f3cc6e962feb55be41cb.scope - libcontainer container bc26cd10d07cd2b2db53f8f3376697eff5dae3dbb7e3f3cc6e962feb55be41cb.
May 15 00:36:51.273819 systemd[1]: Started cri-containerd-be0682e3664f93a91e5e4388445d93a4ae654d1c5c768328dd6d3a56ae44cb0d.scope - libcontainer container be0682e3664f93a91e5e4388445d93a4ae654d1c5c768328dd6d3a56ae44cb0d.
May 15 00:36:51.276476 systemd[1]: Started cri-containerd-abb8efe5dd2323d2fb18aa7caca3a0c90e733e33e5a56ce077a0ae725eefaf11.scope - libcontainer container abb8efe5dd2323d2fb18aa7caca3a0c90e733e33e5a56ce077a0ae725eefaf11.
May 15 00:36:51.327985 containerd[1442]: time="2025-05-15T00:36:51.323432948Z" level=info msg="StartContainer for \"be0682e3664f93a91e5e4388445d93a4ae654d1c5c768328dd6d3a56ae44cb0d\" returns successfully"
May 15 00:36:51.327985 containerd[1442]: time="2025-05-15T00:36:51.323561229Z" level=info msg="StartContainer for \"abb8efe5dd2323d2fb18aa7caca3a0c90e733e33e5a56ce077a0ae725eefaf11\" returns successfully"
May 15 00:36:51.327985 containerd[1442]: time="2025-05-15T00:36:51.323600065Z" level=info msg="StartContainer for \"bc26cd10d07cd2b2db53f8f3376697eff5dae3dbb7e3f3cc6e962feb55be41cb\" returns successfully"
May 15 00:36:51.387205 kubelet[2179]: W0515 00:36:51.386355 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.387205 kubelet[2179]: E0515 00:36:51.386417 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.154:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.387205 kubelet[2179]: W0515 00:36:51.386854 2179 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.387205 kubelet[2179]: E0515 00:36:51.386894 2179 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.154:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.154:6443: connect: connection refused
May 15 00:36:51.420882 kubelet[2179]: E0515 00:36:51.420751 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.154:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.154:6443: connect: connection refused" interval="1.6s"
May 15 00:36:51.524269 kubelet[2179]: I0515 00:36:51.524241 2179 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
May 15 00:36:52.040880 kubelet[2179]: E0515 00:36:52.040603 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:52.042068 kubelet[2179]: E0515 00:36:52.042037 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:52.043697 kubelet[2179]: E0515 00:36:52.043654 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:53.048558 kubelet[2179]: E0515 00:36:53.048471 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:53.194722 kubelet[2179]: E0515 00:36:53.194642 2179 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 15 00:36:53.363470 kubelet[2179]: I0515 00:36:53.363241 2179 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
May 15 00:36:53.374546 kubelet[2179]: E0515 00:36:53.374304 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:36:53.475262 kubelet[2179]: E0515 00:36:53.475203 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:36:53.575726 kubelet[2179]: E0515 00:36:53.575672 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:36:53.676344 kubelet[2179]: E0515 00:36:53.676241 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:36:53.776809 kubelet[2179]: E0515 00:36:53.776771 2179 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 00:36:54.010233 kubelet[2179]: I0515 00:36:54.010194 2179 apiserver.go:52] "Watching apiserver"
May 15 00:36:54.017530 kubelet[2179]: I0515 00:36:54.017507 2179 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
May 15 00:36:55.164020 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-7.scope)...
May 15 00:36:55.164036 systemd[1]: Reloading...
May 15 00:36:55.199404 kubelet[2179]: E0515 00:36:55.199369 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:36:55.230696 zram_generator::config[2502]: No configuration found.
May 15 00:36:55.317353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:36:55.382010 systemd[1]: Reloading finished in 217 ms.
May 15 00:36:55.412216 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:55.422157 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:36:55.423368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:55.423410 systemd[1]: kubelet.service: Consumed 1.201s CPU time, 115.3M memory peak, 0B memory swap peak.
May 15 00:36:55.435006 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 00:36:55.524390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 00:36:55.527953 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 00:36:55.567693 kubelet[2541]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:36:55.567693 kubelet[2541]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 00:36:55.567693 kubelet[2541]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:36:55.568082 kubelet[2541]: I0515 00:36:55.567737 2541 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:36:55.571481 kubelet[2541]: I0515 00:36:55.571455 2541 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 00:36:55.571481 kubelet[2541]: I0515 00:36:55.571479 2541 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:36:55.571651 kubelet[2541]: I0515 00:36:55.571634 2541 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 00:36:55.572872 kubelet[2541]: I0515 00:36:55.572847 2541 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
May 15 00:36:55.573993 kubelet[2541]: I0515 00:36:55.573974 2541 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:36:55.580120 kubelet[2541]: I0515 00:36:55.580100 2541 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:36:55.580306 kubelet[2541]: I0515 00:36:55.580284 2541 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:36:55.580449 kubelet[2541]: I0515 00:36:55.580309 2541 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 00:36:55.580527 kubelet[2541]: I0515 00:36:55.580456 2541 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:36:55.580527 kubelet[2541]: I0515 00:36:55.580464 2541 container_manager_linux.go:301] "Creating device plugin manager"
May 15 00:36:55.580527 kubelet[2541]: I0515 00:36:55.580492 2541 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:36:55.580593 kubelet[2541]: I0515 00:36:55.580587 2541 kubelet.go:400] "Attempting to sync node with API server"
May 15 00:36:55.580616 kubelet[2541]: I0515 00:36:55.580598 2541 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:36:55.580636 kubelet[2541]: I0515 00:36:55.580622 2541 kubelet.go:312] "Adding apiserver pod source"
May 15 00:36:55.580636 kubelet[2541]: I0515 00:36:55.580635 2541 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:36:55.581330 kubelet[2541]: I0515 00:36:55.581224 2541 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
May 15 00:36:55.581397 kubelet[2541]: I0515 00:36:55.581379 2541 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:36:55.581762 kubelet[2541]: I0515 00:36:55.581734 2541 server.go:1264] "Started kubelet"
May 15 00:36:55.582462 kubelet[2541]: I0515 00:36:55.582420 2541 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:36:55.582847 kubelet[2541]: I0515 00:36:55.582828 2541 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:36:55.582961 kubelet[2541]: I0515 00:36:55.582943 2541 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:36:55.583809 kubelet[2541]: I0515 00:36:55.583787 2541 server.go:455] "Adding debug handlers to kubelet server"
May 15 00:36:55.584593 kubelet[2541]: I0515 00:36:55.582994 2541 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:36:55.584754 kubelet[2541]: I0515 00:36:55.584731 2541 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 00:36:55.584875 kubelet[2541]: I0515 00:36:55.584858 2541 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:36:55.585005 kubelet[2541]: I0515 00:36:55.584986 2541 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:36:55.585854 kubelet[2541]: I0515 00:36:55.585828 2541 factory.go:221] Registration of the systemd container factory successfully
May 15 00:36:55.585962 kubelet[2541]: I0515 00:36:55.585930 2541 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:36:55.586773 kubelet[2541]: E0515 00:36:55.586745 2541 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:36:55.587318 kubelet[2541]: I0515 00:36:55.587296 2541 factory.go:221] Registration of the containerd container factory successfully
May 15 00:36:55.593674 kubelet[2541]: I0515 00:36:55.592584 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:36:55.593674 kubelet[2541]: I0515 00:36:55.593444 2541 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:36:55.593674 kubelet[2541]: I0515 00:36:55.593467 2541 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 00:36:55.593674 kubelet[2541]: I0515 00:36:55.593481 2541 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 00:36:55.593674 kubelet[2541]: E0515 00:36:55.593515 2541 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:36:55.651565 kubelet[2541]: I0515 00:36:55.651527 2541 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 00:36:55.651565 kubelet[2541]: I0515 00:36:55.651546 2541 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 00:36:55.651565 kubelet[2541]: I0515 00:36:55.651565 2541 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:36:55.651764 kubelet[2541]: I0515 00:36:55.651736 2541 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 00:36:55.651764 kubelet[2541]: I0515 00:36:55.651746 2541 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 00:36:55.651764 kubelet[2541]: I0515 00:36:55.651764 2541 policy_none.go:49] "None policy: Start"
May 15 00:36:55.652428 kubelet[2541]: I0515 00:36:55.652405 2541 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 00:36:55.652428 kubelet[2541]: I0515 00:36:55.652424 2541 state_mem.go:35] "Initializing new in-memory state store"
May 15 00:36:55.652548 kubelet[2541]: I0515 00:36:55.652534 2541 state_mem.go:75] "Updated machine memory state"
May 15 00:36:55.656067 kubelet[2541]: I0515 00:36:55.656046 2541 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 00:36:55.656245 kubelet[2541]: I0515 00:36:55.656194 2541 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 00:36:55.656304 kubelet[2541]: I0515 00:36:55.656288 2541
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:36:55.688807 kubelet[2541]: I0515 00:36:55.688726 2541 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:36:55.693727 kubelet[2541]: I0515 00:36:55.693655 2541 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 00:36:55.694440 kubelet[2541]: I0515 00:36:55.693901 2541 topology_manager.go:215] "Topology Admit Handler" podUID="b4bf5f2b890f1aab3c63d83134a91619" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 00:36:55.694440 kubelet[2541]: I0515 00:36:55.693944 2541 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 00:36:55.696331 kubelet[2541]: I0515 00:36:55.696310 2541 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 00:36:55.696397 kubelet[2541]: I0515 00:36:55.696376 2541 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 00:36:55.699875 kubelet[2541]: E0515 00:36:55.699837 2541 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:36:55.786449 kubelet[2541]: I0515 00:36:55.786400 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:36:55.786568 kubelet[2541]: I0515 00:36:55.786443 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:36:55.786568 kubelet[2541]: I0515 00:36:55.786491 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost" May 15 00:36:55.786568 kubelet[2541]: I0515 00:36:55.786508 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:36:55.786568 kubelet[2541]: I0515 00:36:55.786526 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost" May 15 00:36:55.786568 kubelet[2541]: I0515 00:36:55.786544 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:36:55.786723 kubelet[2541]: I0515 00:36:55.786560 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:36:55.786723 kubelet[2541]: I0515 00:36:55.786576 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 00:36:55.786723 kubelet[2541]: I0515 00:36:55.786591 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf5f2b890f1aab3c63d83134a91619-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b4bf5f2b890f1aab3c63d83134a91619\") " pod="kube-system/kube-apiserver-localhost" May 15 00:36:55.998448 kubelet[2541]: E0515 00:36:55.998423 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.001025 kubelet[2541]: E0515 00:36:56.000949 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.001025 kubelet[2541]: E0515 00:36:56.000968 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.168958 sudo[2576]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 00:36:56.169228 sudo[2576]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 00:36:56.581319 kubelet[2541]: 
I0515 00:36:56.581285 2541 apiserver.go:52] "Watching apiserver" May 15 00:36:56.585281 kubelet[2541]: I0515 00:36:56.585232 2541 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:36:56.590991 sudo[2576]: pam_unix(sudo:session): session closed for user root May 15 00:36:56.628543 kubelet[2541]: E0515 00:36:56.628506 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.629356 kubelet[2541]: E0515 00:36:56.629296 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.638167 kubelet[2541]: E0515 00:36:56.637606 2541 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:36:56.638167 kubelet[2541]: E0515 00:36:56.638066 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:56.656034 kubelet[2541]: I0515 00:36:56.655200 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.655185188 podStartE2EDuration="1.655185188s" podCreationTimestamp="2025-05-15 00:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:36:56.655102428 +0000 UTC m=+1.123975519" watchObservedRunningTime="2025-05-15 00:36:56.655185188 +0000 UTC m=+1.124058199" May 15 00:36:56.664017 kubelet[2541]: I0515 00:36:56.663708 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.6636948980000001 podStartE2EDuration="1.663694898s" podCreationTimestamp="2025-05-15 00:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:36:56.663273935 +0000 UTC m=+1.132147026" watchObservedRunningTime="2025-05-15 00:36:56.663694898 +0000 UTC m=+1.132567909" May 15 00:36:56.671597 kubelet[2541]: I0515 00:36:56.671533 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6715212780000002 podStartE2EDuration="1.671521278s" podCreationTimestamp="2025-05-15 00:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:36:56.669909099 +0000 UTC m=+1.138782110" watchObservedRunningTime="2025-05-15 00:36:56.671521278 +0000 UTC m=+1.140394289" May 15 00:36:57.630628 kubelet[2541]: E0515 00:36:57.630523 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:36:58.286334 sudo[1619]: pam_unix(sudo:session): session closed for user root May 15 00:36:58.288912 sshd[1616]: pam_unix(sshd:session): session closed for user core May 15 00:36:58.291999 systemd[1]: sshd@6-10.0.0.154:22-10.0.0.1:60344.service: Deactivated successfully. May 15 00:36:58.293649 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:36:58.293847 systemd[1]: session-7.scope: Consumed 7.053s CPU time, 190.9M memory peak, 0B memory swap peak. May 15 00:36:58.295614 systemd-logind[1419]: Session 7 logged out. Waiting for processes to exit. May 15 00:36:58.296482 systemd-logind[1419]: Removed session 7. 
May 15 00:36:59.146711 kubelet[2541]: E0515 00:36:59.146641 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:01.307797 kubelet[2541]: E0515 00:37:01.307725 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:01.635601 kubelet[2541]: E0515 00:37:01.635425 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:01.736517 kubelet[2541]: E0515 00:37:01.736479 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:02.636973 kubelet[2541]: E0515 00:37:02.636882 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:09.153476 kubelet[2541]: E0515 00:37:09.153443 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:11.440556 kubelet[2541]: I0515 00:37:11.440519 2541 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:37:11.440949 containerd[1442]: time="2025-05-15T00:37:11.440839579Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 00:37:11.441145 kubelet[2541]: I0515 00:37:11.440991 2541 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:37:12.151273 update_engine[1423]: I20250515 00:37:12.150696 1423 update_attempter.cc:509] Updating boot flags... May 15 00:37:12.184695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2626) May 15 00:37:12.211691 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2629) May 15 00:37:12.248750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2629) May 15 00:37:12.380294 kubelet[2541]: I0515 00:37:12.380230 2541 topology_manager.go:215] "Topology Admit Handler" podUID="edcd054f-eb37-45f2-a8ad-0b35661f5b08" podNamespace="kube-system" podName="kube-proxy-cczrm" May 15 00:37:12.390307 kubelet[2541]: I0515 00:37:12.390076 2541 topology_manager.go:215] "Topology Admit Handler" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" podNamespace="kube-system" podName="cilium-pz8kz" May 15 00:37:12.391519 systemd[1]: Created slice kubepods-besteffort-podedcd054f_eb37_45f2_a8ad_0b35661f5b08.slice - libcontainer container kubepods-besteffort-podedcd054f_eb37_45f2_a8ad_0b35661f5b08.slice. 
May 15 00:37:12.403815 kubelet[2541]: I0515 00:37:12.403726 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpwq8\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-kube-api-access-qpwq8\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.404389 kubelet[2541]: I0515 00:37:12.404358 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edcd054f-eb37-45f2-a8ad-0b35661f5b08-xtables-lock\") pod \"kube-proxy-cczrm\" (UID: \"edcd054f-eb37-45f2-a8ad-0b35661f5b08\") " pod="kube-system/kube-proxy-cczrm" May 15 00:37:12.404522 kubelet[2541]: I0515 00:37:12.404508 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-lib-modules\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.404780 kubelet[2541]: I0515 00:37:12.404764 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-xtables-lock\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404867 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-hubble-tls\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404888 2541 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-kernel\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404908 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55tx\" (UniqueName: \"kubernetes.io/projected/edcd054f-eb37-45f2-a8ad-0b35661f5b08-kube-api-access-j55tx\") pod \"kube-proxy-cczrm\" (UID: \"edcd054f-eb37-45f2-a8ad-0b35661f5b08\") " pod="kube-system/kube-proxy-cczrm" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404923 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-bpf-maps\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404947 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-hostproc\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405072 kubelet[2541]: I0515 00:37:12.404962 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-cgroup\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.404989 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-config-path\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.405004 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-run\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.405051 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cni-path\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.405100 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-etc-cni-netd\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.405149 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-net\") pod \"cilium-pz8kz\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405301 kubelet[2541]: I0515 00:37:12.405200 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65e55078-37b8-4336-8cb3-2a90d99bbb85-clustermesh-secrets\") pod \"cilium-pz8kz\" (UID: 
\"65e55078-37b8-4336-8cb3-2a90d99bbb85\") " pod="kube-system/cilium-pz8kz" May 15 00:37:12.405431 kubelet[2541]: I0515 00:37:12.405235 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/edcd054f-eb37-45f2-a8ad-0b35661f5b08-kube-proxy\") pod \"kube-proxy-cczrm\" (UID: \"edcd054f-eb37-45f2-a8ad-0b35661f5b08\") " pod="kube-system/kube-proxy-cczrm" May 15 00:37:12.405431 kubelet[2541]: I0515 00:37:12.405262 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edcd054f-eb37-45f2-a8ad-0b35661f5b08-lib-modules\") pod \"kube-proxy-cczrm\" (UID: \"edcd054f-eb37-45f2-a8ad-0b35661f5b08\") " pod="kube-system/kube-proxy-cczrm" May 15 00:37:12.408478 systemd[1]: Created slice kubepods-burstable-pod65e55078_37b8_4336_8cb3_2a90d99bbb85.slice - libcontainer container kubepods-burstable-pod65e55078_37b8_4336_8cb3_2a90d99bbb85.slice. May 15 00:37:12.471963 kubelet[2541]: I0515 00:37:12.471922 2541 topology_manager.go:215] "Topology Admit Handler" podUID="4a8ef949-5503-4273-ad53-3492fd0a5b7a" podNamespace="kube-system" podName="cilium-operator-599987898-qfr6r" May 15 00:37:12.480283 systemd[1]: Created slice kubepods-besteffort-pod4a8ef949_5503_4273_ad53_3492fd0a5b7a.slice - libcontainer container kubepods-besteffort-pod4a8ef949_5503_4273_ad53_3492fd0a5b7a.slice. 
May 15 00:37:12.506943 kubelet[2541]: I0515 00:37:12.506904 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a8ef949-5503-4273-ad53-3492fd0a5b7a-cilium-config-path\") pod \"cilium-operator-599987898-qfr6r\" (UID: \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\") " pod="kube-system/cilium-operator-599987898-qfr6r" May 15 00:37:12.508160 kubelet[2541]: I0515 00:37:12.508081 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slrp\" (UniqueName: \"kubernetes.io/projected/4a8ef949-5503-4273-ad53-3492fd0a5b7a-kube-api-access-9slrp\") pod \"cilium-operator-599987898-qfr6r\" (UID: \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\") " pod="kube-system/cilium-operator-599987898-qfr6r" May 15 00:37:12.701596 kubelet[2541]: E0515 00:37:12.701493 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.712696 kubelet[2541]: E0515 00:37:12.712652 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.713789 containerd[1442]: time="2025-05-15T00:37:12.713750893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pz8kz,Uid:65e55078-37b8-4336-8cb3-2a90d99bbb85,Namespace:kube-system,Attempt:0,}" May 15 00:37:12.716214 containerd[1442]: time="2025-05-15T00:37:12.715835538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cczrm,Uid:edcd054f-eb37-45f2-a8ad-0b35661f5b08,Namespace:kube-system,Attempt:0,}" May 15 00:37:12.736403 containerd[1442]: time="2025-05-15T00:37:12.735971497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:37:12.736403 containerd[1442]: time="2025-05-15T00:37:12.736039429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:37:12.736403 containerd[1442]: time="2025-05-15T00:37:12.736054231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.736403 containerd[1442]: time="2025-05-15T00:37:12.736141206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.744225 containerd[1442]: time="2025-05-15T00:37:12.744109559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:37:12.744225 containerd[1442]: time="2025-05-15T00:37:12.744166689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:37:12.744225 containerd[1442]: time="2025-05-15T00:37:12.744177891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.744433 containerd[1442]: time="2025-05-15T00:37:12.744269147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.756876 systemd[1]: Started cri-containerd-cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006.scope - libcontainer container cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006. May 15 00:37:12.760810 systemd[1]: Started cri-containerd-8d31b6b71f1c499388c1df120e883ea00e6044bbff53a65ce1901cbaa7150dc6.scope - libcontainer container 8d31b6b71f1c499388c1df120e883ea00e6044bbff53a65ce1901cbaa7150dc6. 
May 15 00:37:12.783263 containerd[1442]: time="2025-05-15T00:37:12.783104214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pz8kz,Uid:65e55078-37b8-4336-8cb3-2a90d99bbb85,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\"" May 15 00:37:12.786025 kubelet[2541]: E0515 00:37:12.785720 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.786223 containerd[1442]: time="2025-05-15T00:37:12.786175270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qfr6r,Uid:4a8ef949-5503-4273-ad53-3492fd0a5b7a,Namespace:kube-system,Attempt:0,}" May 15 00:37:12.788174 kubelet[2541]: E0515 00:37:12.788147 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.789496 containerd[1442]: time="2025-05-15T00:37:12.789461325Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 00:37:12.790885 containerd[1442]: time="2025-05-15T00:37:12.790547074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cczrm,Uid:edcd054f-eb37-45f2-a8ad-0b35661f5b08,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d31b6b71f1c499388c1df120e883ea00e6044bbff53a65ce1901cbaa7150dc6\"" May 15 00:37:12.792598 kubelet[2541]: E0515 00:37:12.791854 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.799948 containerd[1442]: time="2025-05-15T00:37:12.799855781Z" level=info msg="CreateContainer within sandbox \"8d31b6b71f1c499388c1df120e883ea00e6044bbff53a65ce1901cbaa7150dc6\" 
for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:37:12.820101 containerd[1442]: time="2025-05-15T00:37:12.820035668Z" level=info msg="CreateContainer within sandbox \"8d31b6b71f1c499388c1df120e883ea00e6044bbff53a65ce1901cbaa7150dc6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d88685bc5bd66163262afff45328baa50463497a14e30840cffd964f8e9c6c1\"" May 15 00:37:12.820577 containerd[1442]: time="2025-05-15T00:37:12.820212019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:37:12.820635 containerd[1442]: time="2025-05-15T00:37:12.820576122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:37:12.821009 containerd[1442]: time="2025-05-15T00:37:12.820927704Z" level=info msg="StartContainer for \"8d88685bc5bd66163262afff45328baa50463497a14e30840cffd964f8e9c6c1\"" May 15 00:37:12.821009 containerd[1442]: time="2025-05-15T00:37:12.820928504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.821009 containerd[1442]: time="2025-05-15T00:37:12.821293768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:37:12.840897 systemd[1]: Started cri-containerd-916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78.scope - libcontainer container 916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78. May 15 00:37:12.843764 systemd[1]: Started cri-containerd-8d88685bc5bd66163262afff45328baa50463497a14e30840cffd964f8e9c6c1.scope - libcontainer container 8d88685bc5bd66163262afff45328baa50463497a14e30840cffd964f8e9c6c1. 
May 15 00:37:12.873093 containerd[1442]: time="2025-05-15T00:37:12.873047452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qfr6r,Uid:4a8ef949-5503-4273-ad53-3492fd0a5b7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\"" May 15 00:37:12.873654 kubelet[2541]: E0515 00:37:12.873628 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:12.877116 containerd[1442]: time="2025-05-15T00:37:12.877079157Z" level=info msg="StartContainer for \"8d88685bc5bd66163262afff45328baa50463497a14e30840cffd964f8e9c6c1\" returns successfully" May 15 00:37:13.660691 kubelet[2541]: E0515 00:37:13.660602 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:37:13.671773 kubelet[2541]: I0515 00:37:13.671588 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cczrm" podStartSLOduration=1.67157092 podStartE2EDuration="1.67157092s" podCreationTimestamp="2025-05-15 00:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:37:13.671218381 +0000 UTC m=+18.140091392" watchObservedRunningTime="2025-05-15 00:37:13.67157092 +0000 UTC m=+18.140443931" May 15 00:37:17.061250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031228146.mount: Deactivated successfully. 
May 15 00:37:23.452326 containerd[1442]: time="2025-05-15T00:37:23.452274059Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:37:23.453561 containerd[1442]: time="2025-05-15T00:37:23.453513830Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 15 00:37:23.454402 containerd[1442]: time="2025-05-15T00:37:23.454370041Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:37:23.456601 containerd[1442]: time="2025-05-15T00:37:23.456364892Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.66565371s"
May 15 00:37:23.456601 containerd[1442]: time="2025-05-15T00:37:23.456402736Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 15 00:37:23.458913 containerd[1442]: time="2025-05-15T00:37:23.458853675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 00:37:23.460228 containerd[1442]: time="2025-05-15T00:37:23.460193377Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:37:23.480781 containerd[1442]: time="2025-05-15T00:37:23.480729631Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\""
May 15 00:37:23.481271 containerd[1442]: time="2025-05-15T00:37:23.481245206Z" level=info msg="StartContainer for \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\""
May 15 00:37:23.516850 systemd[1]: Started cri-containerd-a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222.scope - libcontainer container a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222.
May 15 00:37:23.551620 containerd[1442]: time="2025-05-15T00:37:23.551555489Z" level=info msg="StartContainer for \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\" returns successfully"
May 15 00:37:23.689400 kubelet[2541]: E0515 00:37:23.689360 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:23.696369 systemd[1]: cri-containerd-a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222.scope: Deactivated successfully.
May 15 00:37:23.833074 containerd[1442]: time="2025-05-15T00:37:23.824620437Z" level=info msg="shim disconnected" id=a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222 namespace=k8s.io
May 15 00:37:23.833074 containerd[1442]: time="2025-05-15T00:37:23.832881352Z" level=warning msg="cleaning up after shim disconnected" id=a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222 namespace=k8s.io
May 15 00:37:23.833074 containerd[1442]: time="2025-05-15T00:37:23.832898273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:37:24.412267 systemd[1]: Started sshd@7-10.0.0.154:22-10.0.0.1:40972.service - OpenSSH per-connection server daemon (10.0.0.1:40972).
May 15 00:37:24.463155 sshd[3008]: Accepted publickey for core from 10.0.0.1 port 40972 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:24.464881 sshd[3008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:24.469682 systemd-logind[1419]: New session 8 of user core.
May 15 00:37:24.478919 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 00:37:24.480862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222-rootfs.mount: Deactivated successfully.
May 15 00:37:24.633312 sshd[3008]: pam_unix(sshd:session): session closed for user core
May 15 00:37:24.637067 systemd[1]: sshd@7-10.0.0.154:22-10.0.0.1:40972.service: Deactivated successfully.
May 15 00:37:24.639327 systemd[1]: session-8.scope: Deactivated successfully.
May 15 00:37:24.640382 systemd-logind[1419]: Session 8 logged out. Waiting for processes to exit.
May 15 00:37:24.641481 systemd-logind[1419]: Removed session 8.
May 15 00:37:24.694468 kubelet[2541]: E0515 00:37:24.693174 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:24.700848 containerd[1442]: time="2025-05-15T00:37:24.700805160Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:37:24.720247 containerd[1442]: time="2025-05-15T00:37:24.720190732Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\""
May 15 00:37:24.722976 containerd[1442]: time="2025-05-15T00:37:24.722929050Z" level=info msg="StartContainer for \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\""
May 15 00:37:24.765933 systemd[1]: Started cri-containerd-4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489.scope - libcontainer container 4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489.
May 15 00:37:24.811341 containerd[1442]: time="2025-05-15T00:37:24.811213989Z" level=info msg="StartContainer for \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\" returns successfully"
May 15 00:37:24.825041 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 00:37:24.825605 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 00:37:24.825703 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 00:37:24.831932 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 00:37:24.832118 systemd[1]: cri-containerd-4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489.scope: Deactivated successfully.
May 15 00:37:24.869423 containerd[1442]: time="2025-05-15T00:37:24.869111397Z" level=info msg="shim disconnected" id=4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489 namespace=k8s.io
May 15 00:37:24.869423 containerd[1442]: time="2025-05-15T00:37:24.869172883Z" level=warning msg="cleaning up after shim disconnected" id=4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489 namespace=k8s.io
May 15 00:37:24.869423 containerd[1442]: time="2025-05-15T00:37:24.869181564Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:37:24.876972 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 00:37:24.901496 containerd[1442]: time="2025-05-15T00:37:24.901442845Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:37:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 00:37:24.958538 containerd[1442]: time="2025-05-15T00:37:24.957440700Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:37:24.959503 containerd[1442]: time="2025-05-15T00:37:24.959458305Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 15 00:37:24.960551 containerd[1442]: time="2025-05-15T00:37:24.960516173Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 00:37:24.961838 containerd[1442]: time="2025-05-15T00:37:24.961802704Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.502907664s"
May 15 00:37:24.961949 containerd[1442]: time="2025-05-15T00:37:24.961929876Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 15 00:37:24.964145 containerd[1442]: time="2025-05-15T00:37:24.964120139Z" level=info msg="CreateContainer within sandbox \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 00:37:24.973470 containerd[1442]: time="2025-05-15T00:37:24.973434006Z" level=info msg="CreateContainer within sandbox \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\""
May 15 00:37:24.974232 containerd[1442]: time="2025-05-15T00:37:24.974196364Z" level=info msg="StartContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\""
May 15 00:37:25.002873 systemd[1]: Started cri-containerd-6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a.scope - libcontainer container 6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a.
May 15 00:37:25.031108 containerd[1442]: time="2025-05-15T00:37:25.031051867Z" level=info msg="StartContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" returns successfully"
May 15 00:37:25.484721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489-rootfs.mount: Deactivated successfully.
May 15 00:37:25.699336 kubelet[2541]: E0515 00:37:25.699295 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:25.702471 kubelet[2541]: E0515 00:37:25.702441 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:25.704180 containerd[1442]: time="2025-05-15T00:37:25.704137412Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:37:25.708922 kubelet[2541]: I0515 00:37:25.707597 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qfr6r" podStartSLOduration=1.620370673 podStartE2EDuration="13.707582468s" podCreationTimestamp="2025-05-15 00:37:12 +0000 UTC" firstStartedPulling="2025-05-15 00:37:12.875506402 +0000 UTC m=+17.344379413" lastFinishedPulling="2025-05-15 00:37:24.962718197 +0000 UTC m=+29.431591208" observedRunningTime="2025-05-15 00:37:25.707268078 +0000 UTC m=+30.176141169" watchObservedRunningTime="2025-05-15 00:37:25.707582468 +0000 UTC m=+30.176455479"
May 15 00:37:25.734144 containerd[1442]: time="2025-05-15T00:37:25.734100222Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\""
May 15 00:37:25.734704 containerd[1442]: time="2025-05-15T00:37:25.734642995Z" level=info msg="StartContainer for \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\""
May 15 00:37:25.769824 systemd[1]: Started cri-containerd-b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589.scope - libcontainer container b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589.
May 15 00:37:25.832227 containerd[1442]: time="2025-05-15T00:37:25.832100166Z" level=info msg="StartContainer for \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\" returns successfully"
May 15 00:37:25.843108 systemd[1]: cri-containerd-b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589.scope: Deactivated successfully.
May 15 00:37:25.863504 containerd[1442]: time="2025-05-15T00:37:25.863297456Z" level=info msg="shim disconnected" id=b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589 namespace=k8s.io
May 15 00:37:25.863504 containerd[1442]: time="2025-05-15T00:37:25.863348942Z" level=warning msg="cleaning up after shim disconnected" id=b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589 namespace=k8s.io
May 15 00:37:25.863504 containerd[1442]: time="2025-05-15T00:37:25.863357142Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:37:26.480768 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589-rootfs.mount: Deactivated successfully.
May 15 00:37:26.706789 kubelet[2541]: E0515 00:37:26.705807 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:26.707163 kubelet[2541]: E0515 00:37:26.707113 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:26.712747 containerd[1442]: time="2025-05-15T00:37:26.712635352Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:37:26.734490 containerd[1442]: time="2025-05-15T00:37:26.734316873Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\""
May 15 00:37:26.735009 containerd[1442]: time="2025-05-15T00:37:26.734979375Z" level=info msg="StartContainer for \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\""
May 15 00:37:26.765837 systemd[1]: Started cri-containerd-6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d.scope - libcontainer container 6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d.
May 15 00:37:26.786264 systemd[1]: cri-containerd-6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d.scope: Deactivated successfully.
May 15 00:37:26.791122 containerd[1442]: time="2025-05-15T00:37:26.790943963Z" level=info msg="StartContainer for \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\" returns successfully"
May 15 00:37:26.807946 containerd[1442]: time="2025-05-15T00:37:26.807882838Z" level=info msg="shim disconnected" id=6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d namespace=k8s.io
May 15 00:37:26.807946 containerd[1442]: time="2025-05-15T00:37:26.807943764Z" level=warning msg="cleaning up after shim disconnected" id=6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d namespace=k8s.io
May 15 00:37:26.807946 containerd[1442]: time="2025-05-15T00:37:26.807953285Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:37:27.709697 kubelet[2541]: E0515 00:37:27.709071 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:27.711370 containerd[1442]: time="2025-05-15T00:37:27.711330541Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:37:27.730454 containerd[1442]: time="2025-05-15T00:37:27.730250217Z" level=info msg="CreateContainer within sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\""
May 15 00:37:27.734867 containerd[1442]: time="2025-05-15T00:37:27.734823032Z" level=info msg="StartContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\""
May 15 00:37:27.773836 systemd[1]: Started cri-containerd-d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a.scope - libcontainer container d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a.
May 15 00:37:27.795806 containerd[1442]: time="2025-05-15T00:37:27.795766199Z" level=info msg="StartContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" returns successfully"
May 15 00:37:27.952634 kubelet[2541]: I0515 00:37:27.951783 2541 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 15 00:37:28.006205 kubelet[2541]: I0515 00:37:28.006162 2541 topology_manager.go:215] "Topology Admit Handler" podUID="cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c4pzd"
May 15 00:37:28.010557 kubelet[2541]: I0515 00:37:28.010224 2541 topology_manager.go:215] "Topology Admit Handler" podUID="d904020b-b736-4de8-a109-348f48e2d3cf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kvdzl"
May 15 00:37:28.019313 systemd[1]: Created slice kubepods-burstable-podcd2f4ada_2b7d_405f_95ff_95aa1dbfc9da.slice - libcontainer container kubepods-burstable-podcd2f4ada_2b7d_405f_95ff_95aa1dbfc9da.slice.
May 15 00:37:28.029391 systemd[1]: Created slice kubepods-burstable-podd904020b_b736_4de8_a109_348f48e2d3cf.slice - libcontainer container kubepods-burstable-podd904020b_b736_4de8_a109_348f48e2d3cf.slice.
May 15 00:37:28.037067 kubelet[2541]: I0515 00:37:28.036918 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg4lr\" (UniqueName: \"kubernetes.io/projected/d904020b-b736-4de8-a109-348f48e2d3cf-kube-api-access-gg4lr\") pod \"coredns-7db6d8ff4d-kvdzl\" (UID: \"d904020b-b736-4de8-a109-348f48e2d3cf\") " pod="kube-system/coredns-7db6d8ff4d-kvdzl"
May 15 00:37:28.037067 kubelet[2541]: I0515 00:37:28.036958 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da-config-volume\") pod \"coredns-7db6d8ff4d-c4pzd\" (UID: \"cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da\") " pod="kube-system/coredns-7db6d8ff4d-c4pzd"
May 15 00:37:28.037067 kubelet[2541]: I0515 00:37:28.036978 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d904020b-b736-4de8-a109-348f48e2d3cf-config-volume\") pod \"coredns-7db6d8ff4d-kvdzl\" (UID: \"d904020b-b736-4de8-a109-348f48e2d3cf\") " pod="kube-system/coredns-7db6d8ff4d-kvdzl"
May 15 00:37:28.037067 kubelet[2541]: I0515 00:37:28.036996 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkx8k\" (UniqueName: \"kubernetes.io/projected/cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da-kube-api-access-lkx8k\") pod \"coredns-7db6d8ff4d-c4pzd\" (UID: \"cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da\") " pod="kube-system/coredns-7db6d8ff4d-c4pzd"
May 15 00:37:28.325860 kubelet[2541]: E0515 00:37:28.325741 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:28.327272 containerd[1442]: time="2025-05-15T00:37:28.326581580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c4pzd,Uid:cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da,Namespace:kube-system,Attempt:0,}"
May 15 00:37:28.333115 kubelet[2541]: E0515 00:37:28.333089 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:28.333735 containerd[1442]: time="2025-05-15T00:37:28.333638357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kvdzl,Uid:d904020b-b736-4de8-a109-348f48e2d3cf,Namespace:kube-system,Attempt:0,}"
May 15 00:37:28.485112 systemd[1]: run-containerd-runc-k8s.io-d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a-runc.SRsK0T.mount: Deactivated successfully.
May 15 00:37:28.719420 kubelet[2541]: E0515 00:37:28.719303 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:28.735827 kubelet[2541]: I0515 00:37:28.735762 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pz8kz" podStartSLOduration=6.066201421 podStartE2EDuration="16.735733334s" podCreationTimestamp="2025-05-15 00:37:12 +0000 UTC" firstStartedPulling="2025-05-15 00:37:12.789063015 +0000 UTC m=+17.257936026" lastFinishedPulling="2025-05-15 00:37:23.458594928 +0000 UTC m=+27.927467939" observedRunningTime="2025-05-15 00:37:28.735714652 +0000 UTC m=+33.204587743" watchObservedRunningTime="2025-05-15 00:37:28.735733334 +0000 UTC m=+33.204606345"
May 15 00:37:29.645134 systemd[1]: Started sshd@8-10.0.0.154:22-10.0.0.1:40974.service - OpenSSH per-connection server daemon (10.0.0.1:40974).
May 15 00:37:29.688762 sshd[3409]: Accepted publickey for core from 10.0.0.1 port 40974 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:29.690749 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:29.694418 systemd-logind[1419]: New session 9 of user core.
May 15 00:37:29.700882 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 00:37:29.721752 kubelet[2541]: E0515 00:37:29.721386 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:29.811056 sshd[3409]: pam_unix(sshd:session): session closed for user core
May 15 00:37:29.815505 systemd[1]: sshd@8-10.0.0.154:22-10.0.0.1:40974.service: Deactivated successfully.
May 15 00:37:29.817967 systemd[1]: session-9.scope: Deactivated successfully.
May 15 00:37:29.820533 systemd-logind[1419]: Session 9 logged out. Waiting for processes to exit.
May 15 00:37:29.821348 systemd-logind[1419]: Removed session 9.
May 15 00:37:30.093615 systemd-networkd[1383]: cilium_host: Link UP
May 15 00:37:30.094211 systemd-networkd[1383]: cilium_net: Link UP
May 15 00:37:30.094853 systemd-networkd[1383]: cilium_net: Gained carrier
May 15 00:37:30.095072 systemd-networkd[1383]: cilium_host: Gained carrier
May 15 00:37:30.095198 systemd-networkd[1383]: cilium_net: Gained IPv6LL
May 15 00:37:30.095369 systemd-networkd[1383]: cilium_host: Gained IPv6LL
May 15 00:37:30.178714 systemd-networkd[1383]: cilium_vxlan: Link UP
May 15 00:37:30.178899 systemd-networkd[1383]: cilium_vxlan: Gained carrier
May 15 00:37:30.523697 kernel: NET: Registered PF_ALG protocol family
May 15 00:37:30.723893 kubelet[2541]: E0515 00:37:30.723859 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:31.102831 systemd-networkd[1383]: lxc_health: Link UP
May 15 00:37:31.107504 systemd-networkd[1383]: lxc_health: Gained carrier
May 15 00:37:31.232902 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL
May 15 00:37:31.451738 systemd-networkd[1383]: lxc79a55e511f7b: Link UP
May 15 00:37:31.458701 kernel: eth0: renamed from tmpb4244
May 15 00:37:31.465335 systemd-networkd[1383]: lxc79a55e511f7b: Gained carrier
May 15 00:37:31.465575 systemd-networkd[1383]: lxc5f0d19ad1f39: Link UP
May 15 00:37:31.474700 kernel: eth0: renamed from tmp7300a
May 15 00:37:31.481588 systemd-networkd[1383]: lxc5f0d19ad1f39: Gained carrier
May 15 00:37:32.512833 systemd-networkd[1383]: lxc79a55e511f7b: Gained IPv6LL
May 15 00:37:32.641825 systemd-networkd[1383]: lxc_health: Gained IPv6LL
May 15 00:37:32.723008 kubelet[2541]: E0515 00:37:32.722969 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:33.152773 systemd-networkd[1383]: lxc5f0d19ad1f39: Gained IPv6LL
May 15 00:37:34.833300 systemd[1]: Started sshd@9-10.0.0.154:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272).
May 15 00:37:34.867728 sshd[3806]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:34.869489 sshd[3806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:34.875843 systemd-logind[1419]: New session 10 of user core.
May 15 00:37:34.883840 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 00:37:34.923027 containerd[1442]: time="2025-05-15T00:37:34.922494285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:37:34.923027 containerd[1442]: time="2025-05-15T00:37:34.922565250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:37:34.923027 containerd[1442]: time="2025-05-15T00:37:34.922580491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:37:34.923027 containerd[1442]: time="2025-05-15T00:37:34.922687899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:37:34.944852 systemd[1]: Started cri-containerd-b424491adae1f00ee7adbc7db3944f2c4473e308a098b1dad44c0c6baefb2aa6.scope - libcontainer container b424491adae1f00ee7adbc7db3944f2c4473e308a098b1dad44c0c6baefb2aa6.
May 15 00:37:34.952705 containerd[1442]: time="2025-05-15T00:37:34.949379940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:37:34.952705 containerd[1442]: time="2025-05-15T00:37:34.950429096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:37:34.952705 containerd[1442]: time="2025-05-15T00:37:34.950465099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:37:34.952705 containerd[1442]: time="2025-05-15T00:37:34.950585947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:37:34.963078 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:37:34.968837 systemd[1]: Started cri-containerd-7300a6e1ad2f1188564e926f76dae811ca957123c6f2ac57dd9053ced09d94ec.scope - libcontainer container 7300a6e1ad2f1188564e926f76dae811ca957123c6f2ac57dd9053ced09d94ec.
May 15 00:37:34.989120 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 00:37:35.000407 containerd[1442]: time="2025-05-15T00:37:35.000361210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c4pzd,Uid:cd2f4ada-2b7d-405f-95ff-95aa1dbfc9da,Namespace:kube-system,Attempt:0,} returns sandbox id \"b424491adae1f00ee7adbc7db3944f2c4473e308a098b1dad44c0c6baefb2aa6\""
May 15 00:37:35.002236 kubelet[2541]: E0515 00:37:35.002194 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:35.005697 containerd[1442]: time="2025-05-15T00:37:35.005216711Z" level=info msg="CreateContainer within sandbox \"b424491adae1f00ee7adbc7db3944f2c4473e308a098b1dad44c0c6baefb2aa6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:37:35.013935 containerd[1442]: time="2025-05-15T00:37:35.013885677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kvdzl,Uid:d904020b-b736-4de8-a109-348f48e2d3cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7300a6e1ad2f1188564e926f76dae811ca957123c6f2ac57dd9053ced09d94ec\""
May 15 00:37:35.015255 kubelet[2541]: E0515 00:37:35.014745 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:35.021832 containerd[1442]: time="2025-05-15T00:37:35.021784469Z" level=info msg="CreateContainer within sandbox \"7300a6e1ad2f1188564e926f76dae811ca957123c6f2ac57dd9053ced09d94ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 00:37:35.033876 containerd[1442]: time="2025-05-15T00:37:35.033831832Z" level=info msg="CreateContainer within sandbox \"b424491adae1f00ee7adbc7db3944f2c4473e308a098b1dad44c0c6baefb2aa6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6250f1fa029f85b11a55cf1d1a92cfb8ca29aae66d6ac9c037590a72f9b21950\""
May 15 00:37:35.034746 containerd[1442]: time="2025-05-15T00:37:35.034718014Z" level=info msg="StartContainer for \"6250f1fa029f85b11a55cf1d1a92cfb8ca29aae66d6ac9c037590a72f9b21950\""
May 15 00:37:35.036675 containerd[1442]: time="2025-05-15T00:37:35.035999704Z" level=info msg="CreateContainer within sandbox \"7300a6e1ad2f1188564e926f76dae811ca957123c6f2ac57dd9053ced09d94ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db16de0b11e1ca43e63fe3855e0de63b561d03261d887c4b1a5e121386ee98ac\""
May 15 00:37:35.037405 containerd[1442]: time="2025-05-15T00:37:35.037286073Z" level=info msg="StartContainer for \"db16de0b11e1ca43e63fe3855e0de63b561d03261d887c4b1a5e121386ee98ac\""
May 15 00:37:35.048239 sshd[3806]: pam_unix(sshd:session): session closed for user core
May 15 00:37:35.053373 systemd[1]: sshd@9-10.0.0.154:22-10.0.0.1:37272.service: Deactivated successfully.
May 15 00:37:35.056480 systemd[1]: session-10.scope: Deactivated successfully.
May 15 00:37:35.058036 systemd-logind[1419]: Session 10 logged out. Waiting for processes to exit.
May 15 00:37:35.059336 systemd-logind[1419]: Removed session 10.
May 15 00:37:35.072877 systemd[1]: Started cri-containerd-6250f1fa029f85b11a55cf1d1a92cfb8ca29aae66d6ac9c037590a72f9b21950.scope - libcontainer container 6250f1fa029f85b11a55cf1d1a92cfb8ca29aae66d6ac9c037590a72f9b21950.
May 15 00:37:35.074763 systemd[1]: Started cri-containerd-db16de0b11e1ca43e63fe3855e0de63b561d03261d887c4b1a5e121386ee98ac.scope - libcontainer container db16de0b11e1ca43e63fe3855e0de63b561d03261d887c4b1a5e121386ee98ac.
May 15 00:37:35.114842 containerd[1442]: time="2025-05-15T00:37:35.113364394Z" level=info msg="StartContainer for \"6250f1fa029f85b11a55cf1d1a92cfb8ca29aae66d6ac9c037590a72f9b21950\" returns successfully"
May 15 00:37:35.114842 containerd[1442]: time="2025-05-15T00:37:35.113364474Z" level=info msg="StartContainer for \"db16de0b11e1ca43e63fe3855e0de63b561d03261d887c4b1a5e121386ee98ac\" returns successfully"
May 15 00:37:35.733992 kubelet[2541]: E0515 00:37:35.733938 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:35.742215 kubelet[2541]: E0515 00:37:35.741190 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:35.750570 kubelet[2541]: I0515 00:37:35.750512 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kvdzl" podStartSLOduration=23.750481153 podStartE2EDuration="23.750481153s" podCreationTimestamp="2025-05-15 00:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:37:35.749983558 +0000 UTC m=+40.218856569" watchObservedRunningTime="2025-05-15 00:37:35.750481153 +0000 UTC m=+40.219354164"
May 15 00:37:35.781191 kubelet[2541]: I0515 00:37:35.781123 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c4pzd" podStartSLOduration=23.781105815 podStartE2EDuration="23.781105815s" podCreationTimestamp="2025-05-15 00:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:37:35.766100526 +0000 UTC m=+40.234973537" watchObservedRunningTime="2025-05-15 00:37:35.781105815 +0000 UTC m=+40.249978826"
May 15 00:37:35.928034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000338057.mount: Deactivated successfully.
May 15 00:37:36.743056 kubelet[2541]: E0515 00:37:36.742919 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:36.743635 kubelet[2541]: E0515 00:37:36.743405 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:37.744458 kubelet[2541]: E0515 00:37:37.744313 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:37.744458 kubelet[2541]: E0515 00:37:37.744391 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:39.783042 kubelet[2541]: I0515 00:37:39.782754 2541 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 00:37:39.783613 kubelet[2541]: E0515 00:37:39.783594 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:40.061294 systemd[1]: Started sshd@10-10.0.0.154:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280).
May 15 00:37:40.099283 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:40.101084 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:40.104525 systemd-logind[1419]: New session 11 of user core.
May 15 00:37:40.113838 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 00:37:40.232357 sshd[3996]: pam_unix(sshd:session): session closed for user core
May 15 00:37:40.243820 systemd[1]: sshd@10-10.0.0.154:22-10.0.0.1:37280.service: Deactivated successfully.
May 15 00:37:40.245549 systemd[1]: session-11.scope: Deactivated successfully.
May 15 00:37:40.247396 systemd-logind[1419]: Session 11 logged out. Waiting for processes to exit.
May 15 00:37:40.249856 systemd[1]: Started sshd@11-10.0.0.154:22-10.0.0.1:37290.service - OpenSSH per-connection server daemon (10.0.0.1:37290).
May 15 00:37:40.252900 systemd-logind[1419]: Removed session 11.
May 15 00:37:40.296426 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 37290 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:40.297795 sshd[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:40.301355 systemd-logind[1419]: New session 12 of user core.
May 15 00:37:40.307856 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 00:37:40.457510 sshd[4012]: pam_unix(sshd:session): session closed for user core
May 15 00:37:40.466124 systemd[1]: sshd@11-10.0.0.154:22-10.0.0.1:37290.service: Deactivated successfully.
May 15 00:37:40.470486 systemd[1]: session-12.scope: Deactivated successfully.
May 15 00:37:40.471789 systemd-logind[1419]: Session 12 logged out. Waiting for processes to exit.
May 15 00:37:40.486056 systemd[1]: Started sshd@12-10.0.0.154:22-10.0.0.1:37302.service - OpenSSH per-connection server daemon (10.0.0.1:37302).
May 15 00:37:40.487005 systemd-logind[1419]: Removed session 12.
May 15 00:37:40.522074 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 37302 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:40.523819 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:40.527731 systemd-logind[1419]: New session 13 of user core.
May 15 00:37:40.536834 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 00:37:40.644311 sshd[4024]: pam_unix(sshd:session): session closed for user core
May 15 00:37:40.647536 systemd[1]: sshd@12-10.0.0.154:22-10.0.0.1:37302.service: Deactivated successfully.
May 15 00:37:40.649226 systemd[1]: session-13.scope: Deactivated successfully.
May 15 00:37:40.649812 systemd-logind[1419]: Session 13 logged out. Waiting for processes to exit.
May 15 00:37:40.650580 systemd-logind[1419]: Removed session 13.
May 15 00:37:40.766583 kubelet[2541]: E0515 00:37:40.766556 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:37:45.655086 systemd[1]: Started sshd@13-10.0.0.154:22-10.0.0.1:47500.service - OpenSSH per-connection server daemon (10.0.0.1:47500).
May 15 00:37:45.691300 sshd[4041]: Accepted publickey for core from 10.0.0.1 port 47500 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:45.692721 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:45.696271 systemd-logind[1419]: New session 14 of user core.
May 15 00:37:45.707816 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 00:37:45.814299 sshd[4041]: pam_unix(sshd:session): session closed for user core
May 15 00:37:45.817795 systemd[1]: sshd@13-10.0.0.154:22-10.0.0.1:47500.service: Deactivated successfully.
May 15 00:37:45.819466 systemd[1]: session-14.scope: Deactivated successfully.
May 15 00:37:45.820065 systemd-logind[1419]: Session 14 logged out. Waiting for processes to exit.
May 15 00:37:45.820908 systemd-logind[1419]: Removed session 14.
May 15 00:37:50.825310 systemd[1]: Started sshd@14-10.0.0.154:22-10.0.0.1:47512.service - OpenSSH per-connection server daemon (10.0.0.1:47512).
May 15 00:37:50.861343 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 47512 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:50.862856 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:50.868841 systemd-logind[1419]: New session 15 of user core.
May 15 00:37:50.883179 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 00:37:50.994772 sshd[4055]: pam_unix(sshd:session): session closed for user core
May 15 00:37:51.005048 systemd[1]: sshd@14-10.0.0.154:22-10.0.0.1:47512.service: Deactivated successfully.
May 15 00:37:51.006579 systemd[1]: session-15.scope: Deactivated successfully.
May 15 00:37:51.007823 systemd-logind[1419]: Session 15 logged out. Waiting for processes to exit.
May 15 00:37:51.018121 systemd[1]: Started sshd@15-10.0.0.154:22-10.0.0.1:47516.service - OpenSSH per-connection server daemon (10.0.0.1:47516).
May 15 00:37:51.022091 systemd-logind[1419]: Removed session 15.
May 15 00:37:51.051526 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 47516 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:51.054450 sshd[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:51.058333 systemd-logind[1419]: New session 16 of user core.
May 15 00:37:51.065828 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 00:37:51.354631 sshd[4070]: pam_unix(sshd:session): session closed for user core
May 15 00:37:51.360958 systemd[1]: sshd@15-10.0.0.154:22-10.0.0.1:47516.service: Deactivated successfully.
May 15 00:37:51.362291 systemd[1]: session-16.scope: Deactivated successfully.
May 15 00:37:51.364876 systemd-logind[1419]: Session 16 logged out. Waiting for processes to exit.
May 15 00:37:51.372915 systemd[1]: Started sshd@16-10.0.0.154:22-10.0.0.1:47518.service - OpenSSH per-connection server daemon (10.0.0.1:47518).
May 15 00:37:51.375014 systemd-logind[1419]: Removed session 16.
May 15 00:37:51.410945 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 47518 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:51.412401 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:51.416096 systemd-logind[1419]: New session 17 of user core.
May 15 00:37:51.426809 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 00:37:52.716876 sshd[4083]: pam_unix(sshd:session): session closed for user core
May 15 00:37:52.731430 systemd[1]: sshd@16-10.0.0.154:22-10.0.0.1:47518.service: Deactivated successfully.
May 15 00:37:52.733429 systemd[1]: session-17.scope: Deactivated successfully.
May 15 00:37:52.735260 systemd-logind[1419]: Session 17 logged out. Waiting for processes to exit.
May 15 00:37:52.747970 systemd[1]: Started sshd@17-10.0.0.154:22-10.0.0.1:38698.service - OpenSSH per-connection server daemon (10.0.0.1:38698).
May 15 00:37:52.749490 systemd-logind[1419]: Removed session 17.
May 15 00:37:52.781728 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 38698 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:52.783287 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:52.789006 systemd-logind[1419]: New session 18 of user core.
May 15 00:37:52.795853 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 00:37:53.008787 sshd[4105]: pam_unix(sshd:session): session closed for user core
May 15 00:37:53.018243 systemd[1]: sshd@17-10.0.0.154:22-10.0.0.1:38698.service: Deactivated successfully.
May 15 00:37:53.021354 systemd[1]: session-18.scope: Deactivated successfully.
May 15 00:37:53.023084 systemd-logind[1419]: Session 18 logged out. Waiting for processes to exit.
May 15 00:37:53.033938 systemd[1]: Started sshd@18-10.0.0.154:22-10.0.0.1:38704.service - OpenSSH per-connection server daemon (10.0.0.1:38704).
May 15 00:37:53.035106 systemd-logind[1419]: Removed session 18.
May 15 00:37:53.064669 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 38704 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:53.066178 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:53.070626 systemd-logind[1419]: New session 19 of user core.
May 15 00:37:53.078830 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 00:37:53.181875 sshd[4118]: pam_unix(sshd:session): session closed for user core
May 15 00:37:53.185419 systemd[1]: sshd@18-10.0.0.154:22-10.0.0.1:38704.service: Deactivated successfully.
May 15 00:37:53.187076 systemd[1]: session-19.scope: Deactivated successfully.
May 15 00:37:53.187932 systemd-logind[1419]: Session 19 logged out. Waiting for processes to exit.
May 15 00:37:53.188787 systemd-logind[1419]: Removed session 19.
May 15 00:37:58.197217 systemd[1]: Started sshd@19-10.0.0.154:22-10.0.0.1:38708.service - OpenSSH per-connection server daemon (10.0.0.1:38708).
May 15 00:37:58.231558 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 38708 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:37:58.232813 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:37:58.236168 systemd-logind[1419]: New session 20 of user core.
May 15 00:37:58.242834 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 00:37:58.348900 sshd[4137]: pam_unix(sshd:session): session closed for user core
May 15 00:37:58.352037 systemd[1]: sshd@19-10.0.0.154:22-10.0.0.1:38708.service: Deactivated successfully.
May 15 00:37:58.355879 systemd[1]: session-20.scope: Deactivated successfully.
May 15 00:37:58.356916 systemd-logind[1419]: Session 20 logged out. Waiting for processes to exit.
May 15 00:37:58.357964 systemd-logind[1419]: Removed session 20.
May 15 00:38:03.360477 systemd[1]: Started sshd@20-10.0.0.154:22-10.0.0.1:54668.service - OpenSSH per-connection server daemon (10.0.0.1:54668).
May 15 00:38:03.398358 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 54668 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:38:03.399750 sshd[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:38:03.404408 systemd-logind[1419]: New session 21 of user core.
May 15 00:38:03.414867 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 00:38:03.523827 sshd[4151]: pam_unix(sshd:session): session closed for user core
May 15 00:38:03.527340 systemd[1]: sshd@20-10.0.0.154:22-10.0.0.1:54668.service: Deactivated successfully.
May 15 00:38:03.531043 systemd[1]: session-21.scope: Deactivated successfully.
May 15 00:38:03.531803 systemd-logind[1419]: Session 21 logged out. Waiting for processes to exit.
May 15 00:38:03.532816 systemd-logind[1419]: Removed session 21.
May 15 00:38:06.594809 kubelet[2541]: E0515 00:38:06.594778 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:08.534184 systemd[1]: Started sshd@21-10.0.0.154:22-10.0.0.1:54672.service - OpenSSH per-connection server daemon (10.0.0.1:54672).
May 15 00:38:08.568768 sshd[4165]: Accepted publickey for core from 10.0.0.1 port 54672 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:38:08.569974 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:38:08.573321 systemd-logind[1419]: New session 22 of user core.
May 15 00:38:08.581793 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 00:38:08.685309 sshd[4165]: pam_unix(sshd:session): session closed for user core
May 15 00:38:08.694029 systemd[1]: sshd@21-10.0.0.154:22-10.0.0.1:54672.service: Deactivated successfully.
May 15 00:38:08.695626 systemd[1]: session-22.scope: Deactivated successfully.
May 15 00:38:08.697143 systemd-logind[1419]: Session 22 logged out. Waiting for processes to exit.
May 15 00:38:08.709115 systemd[1]: Started sshd@22-10.0.0.154:22-10.0.0.1:54688.service - OpenSSH per-connection server daemon (10.0.0.1:54688).
May 15 00:38:08.710196 systemd-logind[1419]: Removed session 22.
May 15 00:38:08.739766 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 54688 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:38:08.740927 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:38:08.744507 systemd-logind[1419]: New session 23 of user core.
May 15 00:38:08.750812 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 00:38:11.020577 containerd[1442]: time="2025-05-15T00:38:11.019824844Z" level=info msg="StopContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" with timeout 30 (s)"
May 15 00:38:11.020577 containerd[1442]: time="2025-05-15T00:38:11.020488986Z" level=info msg="Stop container \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" with signal terminated"
May 15 00:38:11.032870 systemd[1]: cri-containerd-6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a.scope: Deactivated successfully.
May 15 00:38:11.054301 containerd[1442]: time="2025-05-15T00:38:11.054263765Z" level=info msg="StopContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" with timeout 2 (s)"
May 15 00:38:11.054983 containerd[1442]: time="2025-05-15T00:38:11.054955946Z" level=info msg="Stop container \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" with signal terminated"
May 15 00:38:11.063742 systemd-networkd[1383]: lxc_health: Link DOWN
May 15 00:38:11.063750 systemd-networkd[1383]: lxc_health: Lost carrier
May 15 00:38:11.067045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a-rootfs.mount: Deactivated successfully.
May 15 00:38:11.075946 containerd[1442]: time="2025-05-15T00:38:11.075884187Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:38:11.081943 containerd[1442]: time="2025-05-15T00:38:11.081872628Z" level=info msg="shim disconnected" id=6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a namespace=k8s.io
May 15 00:38:11.082208 containerd[1442]: time="2025-05-15T00:38:11.082058823Z" level=warning msg="cleaning up after shim disconnected" id=6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a namespace=k8s.io
May 15 00:38:11.082208 containerd[1442]: time="2025-05-15T00:38:11.082075862Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:11.089489 systemd[1]: cri-containerd-d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a.scope: Deactivated successfully.
May 15 00:38:11.089922 systemd[1]: cri-containerd-d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a.scope: Consumed 6.530s CPU time.
May 15 00:38:11.134626 containerd[1442]: time="2025-05-15T00:38:11.134558781Z" level=info msg="StopContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" returns successfully"
May 15 00:38:11.135358 containerd[1442]: time="2025-05-15T00:38:11.135242083Z" level=info msg="StopPodSandbox for \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\""
May 15 00:38:11.135358 containerd[1442]: time="2025-05-15T00:38:11.135282802Z" level=info msg="Container to stop \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.137567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a-rootfs.mount: Deactivated successfully.
May 15 00:38:11.137698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78-shm.mount: Deactivated successfully.
May 15 00:38:11.142716 containerd[1442]: time="2025-05-15T00:38:11.142638525Z" level=info msg="shim disconnected" id=d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a namespace=k8s.io
May 15 00:38:11.142716 containerd[1442]: time="2025-05-15T00:38:11.142710643Z" level=warning msg="cleaning up after shim disconnected" id=d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a namespace=k8s.io
May 15 00:38:11.142716 containerd[1442]: time="2025-05-15T00:38:11.142721283Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:11.142905 systemd[1]: cri-containerd-916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78.scope: Deactivated successfully.
May 15 00:38:11.159831 containerd[1442]: time="2025-05-15T00:38:11.159718069Z" level=info msg="StopContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" returns successfully"
May 15 00:38:11.160315 containerd[1442]: time="2025-05-15T00:38:11.160269895Z" level=info msg="StopPodSandbox for \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\""
May 15 00:38:11.160315 containerd[1442]: time="2025-05-15T00:38:11.160301494Z" level=info msg="Container to stop \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.160315 containerd[1442]: time="2025-05-15T00:38:11.160313893Z" level=info msg="Container to stop \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.160820 containerd[1442]: time="2025-05-15T00:38:11.160325693Z" level=info msg="Container to stop \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.160820 containerd[1442]: time="2025-05-15T00:38:11.160335853Z" level=info msg="Container to stop \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.160820 containerd[1442]: time="2025-05-15T00:38:11.160346053Z" level=info msg="Container to stop \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 00:38:11.162141 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006-shm.mount: Deactivated successfully.
May 15 00:38:11.169726 systemd[1]: cri-containerd-cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006.scope: Deactivated successfully.
May 15 00:38:11.174600 containerd[1442]: time="2025-05-15T00:38:11.174434476Z" level=info msg="shim disconnected" id=916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78 namespace=k8s.io
May 15 00:38:11.174600 containerd[1442]: time="2025-05-15T00:38:11.174498955Z" level=warning msg="cleaning up after shim disconnected" id=916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78 namespace=k8s.io
May 15 00:38:11.174600 containerd[1442]: time="2025-05-15T00:38:11.174507474Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:11.191767 containerd[1442]: time="2025-05-15T00:38:11.191729055Z" level=info msg="TearDown network for sandbox \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\" successfully"
May 15 00:38:11.191767 containerd[1442]: time="2025-05-15T00:38:11.191763654Z" level=info msg="StopPodSandbox for \"916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78\" returns successfully"
May 15 00:38:11.194492 containerd[1442]: time="2025-05-15T00:38:11.194432263Z" level=info msg="shim disconnected" id=cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006 namespace=k8s.io
May 15 00:38:11.194492 containerd[1442]: time="2025-05-15T00:38:11.194481581Z" level=warning msg="cleaning up after shim disconnected" id=cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006 namespace=k8s.io
May 15 00:38:11.194650 containerd[1442]: time="2025-05-15T00:38:11.194492301Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:11.221315 containerd[1442]: time="2025-05-15T00:38:11.221002473Z" level=info msg="TearDown network for sandbox \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" successfully"
May 15 00:38:11.221315 containerd[1442]: time="2025-05-15T00:38:11.221035112Z" level=info msg="StopPodSandbox for \"cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006\" returns successfully"
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.285909 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-lib-modules\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.285947 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-kernel\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.285965 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-run\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.285982 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-etc-cni-netd\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.286009 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a8ef949-5503-4273-ad53-3492fd0a5b7a-cilium-config-path\") pod \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\" (UID: \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\") "
May 15 00:38:11.286114 kubelet[2541]: I0515 00:38:11.286024 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-bpf-maps\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286038 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-cgroup\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286051 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-net\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286068 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpwq8\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-kube-api-access-qpwq8\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286084 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-config-path\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286099 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-hostproc\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.286846 kubelet[2541]: I0515 00:38:11.286118 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9slrp\" (UniqueName: \"kubernetes.io/projected/4a8ef949-5503-4273-ad53-3492fd0a5b7a-kube-api-access-9slrp\") pod \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\" (UID: \"4a8ef949-5503-4273-ad53-3492fd0a5b7a\") "
May 15 00:38:11.287110 kubelet[2541]: I0515 00:38:11.286134 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-hubble-tls\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.287110 kubelet[2541]: I0515 00:38:11.286149 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-xtables-lock\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.287110 kubelet[2541]: I0515 00:38:11.286162 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cni-path\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.287110 kubelet[2541]: I0515 00:38:11.286179 2541 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65e55078-37b8-4336-8cb3-2a90d99bbb85-clustermesh-secrets\") pod \"65e55078-37b8-4336-8cb3-2a90d99bbb85\" (UID: \"65e55078-37b8-4336-8cb3-2a90d99bbb85\") "
May 15 00:38:11.289494 kubelet[2541]: I0515 00:38:11.289249 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.292586 kubelet[2541]: I0515 00:38:11.292321 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.292586 kubelet[2541]: I0515 00:38:11.292407 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.292586 kubelet[2541]: I0515 00:38:11.292429 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.293649 kubelet[2541]: I0515 00:38:11.292858 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8ef949-5503-4273-ad53-3492fd0a5b7a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4a8ef949-5503-4273-ad53-3492fd0a5b7a" (UID: "4a8ef949-5503-4273-ad53-3492fd0a5b7a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 00:38:11.293649 kubelet[2541]: I0515 00:38:11.292910 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.293649 kubelet[2541]: I0515 00:38:11.292928 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.294735 kubelet[2541]: I0515 00:38:11.294680 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 00:38:11.294735 kubelet[2541]: I0515 00:38:11.294727 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.294836 kubelet[2541]: I0515 00:38:11.294745 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.294836 kubelet[2541]: I0515 00:38:11.294805 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8ef949-5503-4273-ad53-3492fd0a5b7a-kube-api-access-9slrp" (OuterVolumeSpecName: "kube-api-access-9slrp") pod "4a8ef949-5503-4273-ad53-3492fd0a5b7a" (UID: "4a8ef949-5503-4273-ad53-3492fd0a5b7a"). InnerVolumeSpecName "kube-api-access-9slrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:38:11.294888 kubelet[2541]: I0515 00:38:11.294845 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-hostproc" (OuterVolumeSpecName: "hostproc") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.294888 kubelet[2541]: I0515 00:38:11.294865 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cni-path" (OuterVolumeSpecName: "cni-path") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 15 00:38:11.296140 kubelet[2541]: I0515 00:38:11.296086 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-kube-api-access-qpwq8" (OuterVolumeSpecName: "kube-api-access-qpwq8") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "kube-api-access-qpwq8". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:38:11.297093 kubelet[2541]: I0515 00:38:11.296974 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 00:38:11.297093 kubelet[2541]: I0515 00:38:11.297048 2541 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65e55078-37b8-4336-8cb3-2a90d99bbb85-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "65e55078-37b8-4336-8cb3-2a90d99bbb85" (UID: "65e55078-37b8-4336-8cb3-2a90d99bbb85"). InnerVolumeSpecName "clustermesh-secrets".
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387080 2541 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4a8ef949-5503-4273-ad53-3492fd0a5b7a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387109 2541 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387118 2541 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387148 2541 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387156 2541 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387166 2541 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387173 2541 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387266 kubelet[2541]: I0515 00:38:11.387182 
2541 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387191 2541 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qpwq8\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-kube-api-access-qpwq8\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387198 2541 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/65e55078-37b8-4336-8cb3-2a90d99bbb85-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387207 2541 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387215 2541 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387223 2541 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9slrp\" (UniqueName: \"kubernetes.io/projected/4a8ef949-5503-4273-ad53-3492fd0a5b7a-kube-api-access-9slrp\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387233 2541 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/65e55078-37b8-4336-8cb3-2a90d99bbb85-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387241 2541 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/65e55078-37b8-4336-8cb3-2a90d99bbb85-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.387522 kubelet[2541]: I0515 00:38:11.387248 2541 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/65e55078-37b8-4336-8cb3-2a90d99bbb85-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 00:38:11.595084 kubelet[2541]: E0515 00:38:11.594983 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:38:11.602512 systemd[1]: Removed slice kubepods-besteffort-pod4a8ef949_5503_4273_ad53_3492fd0a5b7a.slice - libcontainer container kubepods-besteffort-pod4a8ef949_5503_4273_ad53_3492fd0a5b7a.slice. May 15 00:38:11.604355 systemd[1]: Removed slice kubepods-burstable-pod65e55078_37b8_4336_8cb3_2a90d99bbb85.slice - libcontainer container kubepods-burstable-pod65e55078_37b8_4336_8cb3_2a90d99bbb85.slice. May 15 00:38:11.604553 systemd[1]: kubepods-burstable-pod65e55078_37b8_4336_8cb3_2a90d99bbb85.slice: Consumed 6.786s CPU time. 
May 15 00:38:11.826899 kubelet[2541]: I0515 00:38:11.826658 2541 scope.go:117] "RemoveContainer" containerID="6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a" May 15 00:38:11.828346 containerd[1442]: time="2025-05-15T00:38:11.828302140Z" level=info msg="RemoveContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\"" May 15 00:38:11.853328 containerd[1442]: time="2025-05-15T00:38:11.852373497Z" level=info msg="RemoveContainer for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" returns successfully" May 15 00:38:11.853328 containerd[1442]: time="2025-05-15T00:38:11.852942402Z" level=error msg="ContainerStatus for \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\": not found" May 15 00:38:11.853497 kubelet[2541]: I0515 00:38:11.852652 2541 scope.go:117] "RemoveContainer" containerID="6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a" May 15 00:38:11.864625 kubelet[2541]: E0515 00:38:11.864553 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\": not found" containerID="6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a" May 15 00:38:11.864768 kubelet[2541]: I0515 00:38:11.864605 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a"} err="failed to get container status \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6de2d8bb39470fd6f359cc2cc44cbd21bb9ace0fa1a9f6b7b844f2a1b75cac0a\": not found" May 15 
00:38:11.864768 kubelet[2541]: I0515 00:38:11.864709 2541 scope.go:117] "RemoveContainer" containerID="d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a" May 15 00:38:11.865965 containerd[1442]: time="2025-05-15T00:38:11.865935775Z" level=info msg="RemoveContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\"" May 15 00:38:11.868528 containerd[1442]: time="2025-05-15T00:38:11.868452268Z" level=info msg="RemoveContainer for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" returns successfully" May 15 00:38:11.868670 kubelet[2541]: I0515 00:38:11.868611 2541 scope.go:117] "RemoveContainer" containerID="6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d" May 15 00:38:11.869748 containerd[1442]: time="2025-05-15T00:38:11.869705114Z" level=info msg="RemoveContainer for \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\"" May 15 00:38:11.871928 containerd[1442]: time="2025-05-15T00:38:11.871891856Z" level=info msg="RemoveContainer for \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\" returns successfully" May 15 00:38:11.872119 kubelet[2541]: I0515 00:38:11.872035 2541 scope.go:117] "RemoveContainer" containerID="b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589" May 15 00:38:11.873127 containerd[1442]: time="2025-05-15T00:38:11.873092944Z" level=info msg="RemoveContainer for \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\"" May 15 00:38:11.875269 containerd[1442]: time="2025-05-15T00:38:11.875242007Z" level=info msg="RemoveContainer for \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\" returns successfully" May 15 00:38:11.875456 kubelet[2541]: I0515 00:38:11.875421 2541 scope.go:117] "RemoveContainer" containerID="4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489" May 15 00:38:11.876696 containerd[1442]: time="2025-05-15T00:38:11.876583171Z" level=info msg="RemoveContainer for 
\"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\"" May 15 00:38:11.878711 containerd[1442]: time="2025-05-15T00:38:11.878641756Z" level=info msg="RemoveContainer for \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\" returns successfully" May 15 00:38:11.878885 kubelet[2541]: I0515 00:38:11.878846 2541 scope.go:117] "RemoveContainer" containerID="a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222" May 15 00:38:11.879963 containerd[1442]: time="2025-05-15T00:38:11.879907162Z" level=info msg="RemoveContainer for \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\"" May 15 00:38:11.882138 containerd[1442]: time="2025-05-15T00:38:11.882106983Z" level=info msg="RemoveContainer for \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\" returns successfully" May 15 00:38:11.882270 kubelet[2541]: I0515 00:38:11.882247 2541 scope.go:117] "RemoveContainer" containerID="d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a" May 15 00:38:11.882513 containerd[1442]: time="2025-05-15T00:38:11.882425935Z" level=error msg="ContainerStatus for \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\": not found" May 15 00:38:11.882607 kubelet[2541]: E0515 00:38:11.882586 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\": not found" containerID="d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a" May 15 00:38:11.882642 kubelet[2541]: I0515 00:38:11.882620 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a"} 
err="failed to get container status \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7ffe5b9d03f6fb63d43690cae4d8788026320d64e3c6b989e26ca233ed5b59a\": not found" May 15 00:38:11.882682 kubelet[2541]: I0515 00:38:11.882644 2541 scope.go:117] "RemoveContainer" containerID="6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d" May 15 00:38:11.882879 containerd[1442]: time="2025-05-15T00:38:11.882852883Z" level=error msg="ContainerStatus for \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\": not found" May 15 00:38:11.883079 kubelet[2541]: E0515 00:38:11.883041 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\": not found" containerID="6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d" May 15 00:38:11.883079 kubelet[2541]: I0515 00:38:11.883064 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d"} err="failed to get container status \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e20ac6846debc631bd89694714f5563ebf9ba46987961e40516f4e22ae5c87d\": not found" May 15 00:38:11.883170 kubelet[2541]: I0515 00:38:11.883077 2541 scope.go:117] "RemoveContainer" containerID="b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589" May 15 00:38:11.883366 containerd[1442]: time="2025-05-15T00:38:11.883300192Z" level=error msg="ContainerStatus for 
\"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\": not found" May 15 00:38:11.883435 kubelet[2541]: E0515 00:38:11.883405 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\": not found" containerID="b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589" May 15 00:38:11.883486 kubelet[2541]: I0515 00:38:11.883445 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589"} err="failed to get container status \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\": rpc error: code = NotFound desc = an error occurred when try to find container \"b25eaae78652bc7a017e1609574fdb88c290a609a6196134666a65f0c2857589\": not found" May 15 00:38:11.883486 kubelet[2541]: I0515 00:38:11.883465 2541 scope.go:117] "RemoveContainer" containerID="4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489" May 15 00:38:11.883696 containerd[1442]: time="2025-05-15T00:38:11.883652462Z" level=error msg="ContainerStatus for \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\": not found" May 15 00:38:11.883800 kubelet[2541]: E0515 00:38:11.883782 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\": not found" 
containerID="4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489" May 15 00:38:11.883844 kubelet[2541]: I0515 00:38:11.883804 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489"} err="failed to get container status \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ebf0228f2bf326962865129e5d8e0d9df49cc73e95aa684e366a083484c7489\": not found" May 15 00:38:11.883844 kubelet[2541]: I0515 00:38:11.883833 2541 scope.go:117] "RemoveContainer" containerID="a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222" May 15 00:38:11.884063 containerd[1442]: time="2025-05-15T00:38:11.884001213Z" level=error msg="ContainerStatus for \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\": not found" May 15 00:38:11.884130 kubelet[2541]: E0515 00:38:11.884112 2541 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\": not found" containerID="a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222" May 15 00:38:11.884159 kubelet[2541]: I0515 00:38:11.884133 2541 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222"} err="failed to get container status \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\": rpc error: code = NotFound desc = an error occurred when try to find container \"a68d0a030b3838f9ff3017af6df1dae00b97fcf45c20b283d89f9b699fb21222\": not found" May 15 
00:38:12.030708 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-916380bb364a62a59f10f73d86a0953deabe3c2d489d97301ffb97f620b3aa78-rootfs.mount: Deactivated successfully. May 15 00:38:12.030813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc55894ebf92741048f5191e23b2bd2a7fc3f4ae8ba4e7c3c3f0fc94c74e1006-rootfs.mount: Deactivated successfully. May 15 00:38:12.030865 systemd[1]: var-lib-kubelet-pods-4a8ef949\x2d5503\x2d4273\x2dad53\x2d3492fd0a5b7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9slrp.mount: Deactivated successfully. May 15 00:38:12.030921 systemd[1]: var-lib-kubelet-pods-65e55078\x2d37b8\x2d4336\x2d8cb3\x2d2a90d99bbb85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqpwq8.mount: Deactivated successfully. May 15 00:38:12.030976 systemd[1]: var-lib-kubelet-pods-65e55078\x2d37b8\x2d4336\x2d8cb3\x2d2a90d99bbb85-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 00:38:12.031035 systemd[1]: var-lib-kubelet-pods-65e55078\x2d37b8\x2d4336\x2d8cb3\x2d2a90d99bbb85-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 00:38:12.987538 sshd[4179]: pam_unix(sshd:session): session closed for user core May 15 00:38:13.000475 systemd[1]: sshd@22-10.0.0.154:22-10.0.0.1:54688.service: Deactivated successfully. May 15 00:38:13.002253 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:38:13.002459 systemd[1]: session-23.scope: Consumed 1.616s CPU time. May 15 00:38:13.003610 systemd-logind[1419]: Session 23 logged out. Waiting for processes to exit. May 15 00:38:13.013942 systemd[1]: Started sshd@23-10.0.0.154:22-10.0.0.1:56384.service - OpenSSH per-connection server daemon (10.0.0.1:56384). May 15 00:38:13.015304 systemd-logind[1419]: Removed session 23. 
May 15 00:38:13.045959 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 56384 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:38:13.047570 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:38:13.051816 systemd-logind[1419]: New session 24 of user core. May 15 00:38:13.060897 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 00:38:13.597455 kubelet[2541]: I0515 00:38:13.596631 2541 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8ef949-5503-4273-ad53-3492fd0a5b7a" path="/var/lib/kubelet/pods/4a8ef949-5503-4273-ad53-3492fd0a5b7a/volumes" May 15 00:38:13.597455 kubelet[2541]: I0515 00:38:13.597022 2541 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" path="/var/lib/kubelet/pods/65e55078-37b8-4336-8cb3-2a90d99bbb85/volumes" May 15 00:38:13.897269 sshd[4340]: pam_unix(sshd:session): session closed for user core May 15 00:38:13.902035 kubelet[2541]: I0515 00:38:13.901698 2541 topology_manager.go:215] "Topology Admit Handler" podUID="a398a01e-53a3-46b4-abc7-bfd60b9477df" podNamespace="kube-system" podName="cilium-v2tq2" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901753 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" containerName="apply-sysctl-overwrites" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901766 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4a8ef949-5503-4273-ad53-3492fd0a5b7a" containerName="cilium-operator" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901773 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" containerName="mount-bpf-fs" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901780 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" 
containerName="cilium-agent" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901787 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" containerName="mount-cgroup" May 15 00:38:13.902035 kubelet[2541]: E0515 00:38:13.901792 2541 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" containerName="clean-cilium-state" May 15 00:38:13.902035 kubelet[2541]: I0515 00:38:13.901812 2541 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8ef949-5503-4273-ad53-3492fd0a5b7a" containerName="cilium-operator" May 15 00:38:13.902035 kubelet[2541]: I0515 00:38:13.901819 2541 memory_manager.go:354] "RemoveStaleState removing state" podUID="65e55078-37b8-4336-8cb3-2a90d99bbb85" containerName="cilium-agent" May 15 00:38:13.907701 systemd[1]: sshd@23-10.0.0.154:22-10.0.0.1:56384.service: Deactivated successfully. May 15 00:38:13.911484 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:38:13.915326 systemd-logind[1419]: Session 24 logged out. Waiting for processes to exit. May 15 00:38:13.928821 systemd[1]: Started sshd@24-10.0.0.154:22-10.0.0.1:56396.service - OpenSSH per-connection server daemon (10.0.0.1:56396). May 15 00:38:13.937829 systemd-logind[1419]: Removed session 24. May 15 00:38:13.941113 systemd[1]: Created slice kubepods-burstable-poda398a01e_53a3_46b4_abc7_bfd60b9477df.slice - libcontainer container kubepods-burstable-poda398a01e_53a3_46b4_abc7_bfd60b9477df.slice. May 15 00:38:13.974782 sshd[4355]: Accepted publickey for core from 10.0.0.1 port 56396 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos May 15 00:38:13.977435 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 00:38:13.986940 systemd-logind[1419]: New session 25 of user core. May 15 00:38:13.997867 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 15 00:38:14.005820 kubelet[2541]: I0515 00:38:14.005780 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a398a01e-53a3-46b4-abc7-bfd60b9477df-clustermesh-secrets\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005820 kubelet[2541]: I0515 00:38:14.005823 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-host-proc-sys-kernel\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005843 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-hostproc\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005861 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a398a01e-53a3-46b4-abc7-bfd60b9477df-cilium-ipsec-secrets\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005879 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-bpf-maps\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005898 2541 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-host-proc-sys-net\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005914 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a398a01e-53a3-46b4-abc7-bfd60b9477df-hubble-tls\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.005955 kubelet[2541]: I0515 00:38:14.005950 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-cilium-cgroup\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006110 kubelet[2541]: I0515 00:38:14.005966 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-etc-cni-netd\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006110 kubelet[2541]: I0515 00:38:14.006000 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-lib-modules\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006110 kubelet[2541]: I0515 00:38:14.006046 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrnx\" (UniqueName: 
\"kubernetes.io/projected/a398a01e-53a3-46b4-abc7-bfd60b9477df-kube-api-access-hjrnx\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006110 kubelet[2541]: I0515 00:38:14.006088 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-cilium-run\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006110 kubelet[2541]: I0515 00:38:14.006106 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-cni-path\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006408 kubelet[2541]: I0515 00:38:14.006126 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a398a01e-53a3-46b4-abc7-bfd60b9477df-cilium-config-path\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.006408 kubelet[2541]: I0515 00:38:14.006198 2541 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a398a01e-53a3-46b4-abc7-bfd60b9477df-xtables-lock\") pod \"cilium-v2tq2\" (UID: \"a398a01e-53a3-46b4-abc7-bfd60b9477df\") " pod="kube-system/cilium-v2tq2" May 15 00:38:14.050183 sshd[4355]: pam_unix(sshd:session): session closed for user core May 15 00:38:14.060352 systemd[1]: sshd@24-10.0.0.154:22-10.0.0.1:56396.service: Deactivated successfully. May 15 00:38:14.062185 systemd[1]: session-25.scope: Deactivated successfully. 
May 15 00:38:14.065370 systemd-logind[1419]: Session 25 logged out. Waiting for processes to exit.
May 15 00:38:14.078313 systemd[1]: Started sshd@25-10.0.0.154:22-10.0.0.1:56398.service - OpenSSH per-connection server daemon (10.0.0.1:56398).
May 15 00:38:14.079621 systemd-logind[1419]: Removed session 25.
May 15 00:38:14.110607 sshd[4363]: Accepted publickey for core from 10.0.0.1 port 56398 ssh2: RSA SHA256:tLUu9qOjvvX5QiV2AFoOemAr3R8UMEWwXiUNOTbRKos
May 15 00:38:14.113410 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 00:38:14.126610 systemd-logind[1419]: New session 26 of user core.
May 15 00:38:14.133830 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 00:38:14.251535 kubelet[2541]: E0515 00:38:14.251500 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:14.252230 containerd[1442]: time="2025-05-15T00:38:14.252168072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2tq2,Uid:a398a01e-53a3-46b4-abc7-bfd60b9477df,Namespace:kube-system,Attempt:0,}"
May 15 00:38:14.280436 containerd[1442]: time="2025-05-15T00:38:14.279981513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 00:38:14.280436 containerd[1442]: time="2025-05-15T00:38:14.280151030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 00:38:14.280436 containerd[1442]: time="2025-05-15T00:38:14.280176789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:38:14.280436 containerd[1442]: time="2025-05-15T00:38:14.280277827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 00:38:14.300857 systemd[1]: Started cri-containerd-70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767.scope - libcontainer container 70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767.
May 15 00:38:14.326241 containerd[1442]: time="2025-05-15T00:38:14.326192998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-v2tq2,Uid:a398a01e-53a3-46b4-abc7-bfd60b9477df,Namespace:kube-system,Attempt:0,} returns sandbox id \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\""
May 15 00:38:14.326976 kubelet[2541]: E0515 00:38:14.326893 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:14.330804 containerd[1442]: time="2025-05-15T00:38:14.330708141Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 00:38:14.448800 containerd[1442]: time="2025-05-15T00:38:14.448734758Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b\""
May 15 00:38:14.449915 containerd[1442]: time="2025-05-15T00:38:14.449232668Z" level=info msg="StartContainer for \"3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b\""
May 15 00:38:14.485916 systemd[1]: Started cri-containerd-3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b.scope - libcontainer container 3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b.
May 15 00:38:14.515181 containerd[1442]: time="2025-05-15T00:38:14.515000411Z" level=info msg="StartContainer for \"3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b\" returns successfully"
May 15 00:38:14.538958 systemd[1]: cri-containerd-3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b.scope: Deactivated successfully.
May 15 00:38:14.576483 containerd[1442]: time="2025-05-15T00:38:14.576329250Z" level=info msg="shim disconnected" id=3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b namespace=k8s.io
May 15 00:38:14.576483 containerd[1442]: time="2025-05-15T00:38:14.576425408Z" level=warning msg="cleaning up after shim disconnected" id=3966db259aca9c5a7beeeef338cde8f07ad811e25d3819fda0e2b7186204022b namespace=k8s.io
May 15 00:38:14.576483 containerd[1442]: time="2025-05-15T00:38:14.576434888Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:14.839623 kubelet[2541]: E0515 00:38:14.839514 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:14.841483 containerd[1442]: time="2025-05-15T00:38:14.841446659Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 00:38:14.855816 containerd[1442]: time="2025-05-15T00:38:14.855617554Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd\""
May 15 00:38:14.856943 containerd[1442]: time="2025-05-15T00:38:14.856908246Z" level=info msg="StartContainer for \"111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd\""
May 15 00:38:14.885025 systemd[1]: Started cri-containerd-111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd.scope - libcontainer container 111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd.
May 15 00:38:14.910886 containerd[1442]: time="2025-05-15T00:38:14.910755686Z" level=info msg="StartContainer for \"111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd\" returns successfully"
May 15 00:38:14.921535 systemd[1]: cri-containerd-111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd.scope: Deactivated successfully.
May 15 00:38:14.950632 containerd[1442]: time="2025-05-15T00:38:14.950553709Z" level=info msg="shim disconnected" id=111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd namespace=k8s.io
May 15 00:38:14.950632 containerd[1442]: time="2025-05-15T00:38:14.950612628Z" level=warning msg="cleaning up after shim disconnected" id=111afa6a1306e8a501ccc1654d50ecba1a75a0693c93e9a2afddb91a4efd92bd namespace=k8s.io
May 15 00:38:14.950632 containerd[1442]: time="2025-05-15T00:38:14.950623348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:15.595031 kubelet[2541]: E0515 00:38:15.594649 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:15.696795 kubelet[2541]: E0515 00:38:15.696722 2541 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 00:38:15.844973 kubelet[2541]: E0515 00:38:15.844862 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:15.848643 containerd[1442]: time="2025-05-15T00:38:15.848388884Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 00:38:15.867187 containerd[1442]: time="2025-05-15T00:38:15.867142471Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb\""
May 15 00:38:15.869064 containerd[1442]: time="2025-05-15T00:38:15.868929715Z" level=info msg="StartContainer for \"7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb\""
May 15 00:38:15.899859 systemd[1]: Started cri-containerd-7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb.scope - libcontainer container 7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb.
May 15 00:38:15.924395 systemd[1]: cri-containerd-7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb.scope: Deactivated successfully.
May 15 00:38:15.925967 containerd[1442]: time="2025-05-15T00:38:15.925931579Z" level=info msg="StartContainer for \"7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb\" returns successfully"
May 15 00:38:15.949792 containerd[1442]: time="2025-05-15T00:38:15.949669466Z" level=info msg="shim disconnected" id=7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb namespace=k8s.io
May 15 00:38:15.949792 containerd[1442]: time="2025-05-15T00:38:15.949724705Z" level=warning msg="cleaning up after shim disconnected" id=7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb namespace=k8s.io
May 15 00:38:15.949792 containerd[1442]: time="2025-05-15T00:38:15.949735065Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:16.112222 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f1512258de80f7777e34a654c1d921952659d3e8464eaa62a984543574204eb-rootfs.mount: Deactivated successfully.
May 15 00:38:16.847642 kubelet[2541]: E0515 00:38:16.847582 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:16.849901 containerd[1442]: time="2025-05-15T00:38:16.849856254Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 00:38:16.861553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410955208.mount: Deactivated successfully.
May 15 00:38:16.863278 containerd[1442]: time="2025-05-15T00:38:16.863236648Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913\""
May 15 00:38:16.864706 containerd[1442]: time="2025-05-15T00:38:16.864258309Z" level=info msg="StartContainer for \"0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913\""
May 15 00:38:16.891818 systemd[1]: Started cri-containerd-0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913.scope - libcontainer container 0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913.
May 15 00:38:16.908222 systemd[1]: cri-containerd-0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913.scope: Deactivated successfully.
May 15 00:38:16.910675 containerd[1442]: time="2025-05-15T00:38:16.910630218Z" level=info msg="StartContainer for \"0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913\" returns successfully"
May 15 00:38:16.927003 containerd[1442]: time="2025-05-15T00:38:16.926951078Z" level=info msg="shim disconnected" id=0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913 namespace=k8s.io
May 15 00:38:16.927307 containerd[1442]: time="2025-05-15T00:38:16.927177354Z" level=warning msg="cleaning up after shim disconnected" id=0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913 namespace=k8s.io
May 15 00:38:16.927307 containerd[1442]: time="2025-05-15T00:38:16.927193514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 00:38:16.951706 kubelet[2541]: I0515 00:38:16.951314 2541 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T00:38:16Z","lastTransitionTime":"2025-05-15T00:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 00:38:17.112250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0978ef5a7716980fbe3d60756ff3b93f585bd69c967ee90acfb5cd17916b0913-rootfs.mount: Deactivated successfully.
May 15 00:38:17.852237 kubelet[2541]: E0515 00:38:17.852066 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:17.861831 containerd[1442]: time="2025-05-15T00:38:17.861772328Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 00:38:17.875371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1859614714.mount: Deactivated successfully.
May 15 00:38:17.878304 containerd[1442]: time="2025-05-15T00:38:17.878262170Z" level=info msg="CreateContainer within sandbox \"70f80cff46b0e711f41a2b5f2226056b631e6e027f8e88021e8319a090cb5767\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8\""
May 15 00:38:17.878890 containerd[1442]: time="2025-05-15T00:38:17.878855280Z" level=info msg="StartContainer for \"937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8\""
May 15 00:38:17.905551 systemd[1]: Started cri-containerd-937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8.scope - libcontainer container 937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8.
May 15 00:38:17.926826 containerd[1442]: time="2025-05-15T00:38:17.926791152Z" level=info msg="StartContainer for \"937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8\" returns successfully"
May 15 00:38:18.193688 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 00:38:18.859156 kubelet[2541]: E0515 00:38:18.857264 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:18.874587 kubelet[2541]: I0515 00:38:18.874305 2541 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-v2tq2" podStartSLOduration=5.874288345 podStartE2EDuration="5.874288345s" podCreationTimestamp="2025-05-15 00:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:38:18.873856792 +0000 UTC m=+83.342729803" watchObservedRunningTime="2025-05-15 00:38:18.874288345 +0000 UTC m=+83.343161356"
May 15 00:38:20.253074 kubelet[2541]: E0515 00:38:20.253009 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:21.134339 systemd-networkd[1383]: lxc_health: Link UP
May 15 00:38:21.140472 systemd-networkd[1383]: lxc_health: Gained carrier
May 15 00:38:22.256200 kubelet[2541]: E0515 00:38:22.255735 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:22.593042 systemd[1]: run-containerd-runc-k8s.io-937d47327c0992f80612e39de82a00b2474375115dbdaddd68f491e34f8f73e8-runc.rJUHkc.mount: Deactivated successfully.
May 15 00:38:22.688810 systemd-networkd[1383]: lxc_health: Gained IPv6LL
May 15 00:38:22.869361 kubelet[2541]: E0515 00:38:22.869243 2541 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 00:38:26.914700 sshd[4363]: pam_unix(sshd:session): session closed for user core
May 15 00:38:26.918221 systemd[1]: sshd@25-10.0.0.154:22-10.0.0.1:56398.service: Deactivated successfully.
May 15 00:38:26.920065 systemd[1]: session-26.scope: Deactivated successfully.
May 15 00:38:26.920814 systemd-logind[1419]: Session 26 logged out. Waiting for processes to exit.
May 15 00:38:26.921833 systemd-logind[1419]: Removed session 26.