May 15 09:38:13.876084 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 09:38:13.876106 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 08:06:05 -00 2025
May 15 09:38:13.876117 kernel: KASLR enabled
May 15 09:38:13.876123 kernel: efi: EFI v2.7 by EDK II
May 15 09:38:13.876129 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 15 09:38:13.876134 kernel: random: crng init done
May 15 09:38:13.876141 kernel: secureboot: Secure boot disabled
May 15 09:38:13.876147 kernel: ACPI: Early table checksum verification disabled
May 15 09:38:13.876153 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 09:38:13.876160 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 09:38:13.876166 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876172 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876178 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876184 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876191 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876199 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876205 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876211 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876217 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:38:13.876223 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 09:38:13.876230 kernel: NUMA: Failed to initialise from firmware
May 15 09:38:13.876236 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:38:13.876242 kernel: NUMA: NODE_DATA [mem 0xdc959800-0xdc95efff]
May 15 09:38:13.876248 kernel: Zone ranges:
May 15 09:38:13.876254 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:38:13.876262 kernel: DMA32 empty
May 15 09:38:13.876268 kernel: Normal empty
May 15 09:38:13.876274 kernel: Movable zone start for each node
May 15 09:38:13.876280 kernel: Early memory node ranges
May 15 09:38:13.876286 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 09:38:13.876293 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 09:38:13.876299 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 09:38:13.876305 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 09:38:13.876311 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 09:38:13.876317 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 09:38:13.876323 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 09:38:13.876329 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:38:13.876337 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 09:38:13.876343 kernel: psci: probing for conduit method from ACPI.
May 15 09:38:13.876349 kernel: psci: PSCIv1.1 detected in firmware.
May 15 09:38:13.876358 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 09:38:13.876365 kernel: psci: Trusted OS migration not required
May 15 09:38:13.876371 kernel: psci: SMC Calling Convention v1.1
May 15 09:38:13.876379 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 09:38:13.876386 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 09:38:13.876393 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 09:38:13.876399 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 09:38:13.876406 kernel: Detected PIPT I-cache on CPU0
May 15 09:38:13.876412 kernel: CPU features: detected: GIC system register CPU interface
May 15 09:38:13.876419 kernel: CPU features: detected: Hardware dirty bit management
May 15 09:38:13.876425 kernel: CPU features: detected: Spectre-v4
May 15 09:38:13.876432 kernel: CPU features: detected: Spectre-BHB
May 15 09:38:13.876439 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 09:38:13.876446 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 09:38:13.876453 kernel: CPU features: detected: ARM erratum 1418040
May 15 09:38:13.876460 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 09:38:13.876466 kernel: alternatives: applying boot alternatives
May 15 09:38:13.876474 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:38:13.876481 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 09:38:13.876487 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 09:38:13.876494 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 09:38:13.876501 kernel: Fallback order for Node 0: 0
May 15 09:38:13.876507 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 09:38:13.876514 kernel: Policy zone: DMA
May 15 09:38:13.876521 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 09:38:13.876528 kernel: software IO TLB: area num 4.
May 15 09:38:13.876535 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 09:38:13.876542 kernel: Memory: 2386264K/2572288K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 186024K reserved, 0K cma-reserved)
May 15 09:38:13.876548 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 09:38:13.876555 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 09:38:13.876562 kernel: rcu: RCU event tracing is enabled.
May 15 09:38:13.876569 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 09:38:13.876575 kernel: Trampoline variant of Tasks RCU enabled.
May 15 09:38:13.876582 kernel: Tracing variant of Tasks RCU enabled.
May 15 09:38:13.876589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 09:38:13.876596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 09:38:13.876604 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 09:38:13.876610 kernel: GICv3: 256 SPIs implemented
May 15 09:38:13.876617 kernel: GICv3: 0 Extended SPIs implemented
May 15 09:38:13.876623 kernel: Root IRQ handler: gic_handle_irq
May 15 09:38:13.876630 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 09:38:13.876636 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 09:38:13.876643 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 09:38:13.876662 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 09:38:13.876670 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 09:38:13.876677 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 09:38:13.876683 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 09:38:13.876691 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 09:38:13.876698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:38:13.876705 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 09:38:13.876712 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 09:38:13.876718 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 09:38:13.876725 kernel: arm-pv: using stolen time PV
May 15 09:38:13.876732 kernel: Console: colour dummy device 80x25
May 15 09:38:13.876739 kernel: ACPI: Core revision 20230628
May 15 09:38:13.876746 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 09:38:13.876752 kernel: pid_max: default: 32768 minimum: 301
May 15 09:38:13.876761 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 09:38:13.876768 kernel: landlock: Up and running.
May 15 09:38:13.876774 kernel: SELinux: Initializing.
May 15 09:38:13.876781 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:38:13.876788 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:38:13.876795 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 09:38:13.876802 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:38:13.876809 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:38:13.876816 kernel: rcu: Hierarchical SRCU implementation.
May 15 09:38:13.876824 kernel: rcu: Max phase no-delay instances is 400.
May 15 09:38:13.876830 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 09:38:13.876837 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 09:38:13.876844 kernel: Remapping and enabling EFI services.
May 15 09:38:13.876851 kernel: smp: Bringing up secondary CPUs ...
May 15 09:38:13.876857 kernel: Detected PIPT I-cache on CPU1
May 15 09:38:13.876864 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 09:38:13.876871 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 09:38:13.876878 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:38:13.876885 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 09:38:13.876893 kernel: Detected PIPT I-cache on CPU2
May 15 09:38:13.876900 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 09:38:13.876911 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 09:38:13.876919 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:38:13.876926 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 09:38:13.876941 kernel: Detected PIPT I-cache on CPU3
May 15 09:38:13.876949 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 09:38:13.876956 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 09:38:13.876963 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:38:13.876970 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 09:38:13.876979 kernel: smp: Brought up 1 node, 4 CPUs
May 15 09:38:13.876986 kernel: SMP: Total of 4 processors activated.
May 15 09:38:13.876994 kernel: CPU features: detected: 32-bit EL0 Support
May 15 09:38:13.877001 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 09:38:13.877008 kernel: CPU features: detected: Common not Private translations
May 15 09:38:13.877015 kernel: CPU features: detected: CRC32 instructions
May 15 09:38:13.877023 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 09:38:13.877031 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 09:38:13.877038 kernel: CPU features: detected: LSE atomic instructions
May 15 09:38:13.877055 kernel: CPU features: detected: Privileged Access Never
May 15 09:38:13.877063 kernel: CPU features: detected: RAS Extension Support
May 15 09:38:13.877071 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 09:38:13.877078 kernel: CPU: All CPU(s) started at EL1
May 15 09:38:13.877085 kernel: alternatives: applying system-wide alternatives
May 15 09:38:13.877092 kernel: devtmpfs: initialized
May 15 09:38:13.877099 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 09:38:13.877109 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 09:38:13.877116 kernel: pinctrl core: initialized pinctrl subsystem
May 15 09:38:13.877123 kernel: SMBIOS 3.0.0 present.
May 15 09:38:13.877130 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 09:38:13.877137 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 09:38:13.877145 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 09:38:13.877152 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 09:38:13.877159 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 09:38:13.877166 kernel: audit: initializing netlink subsys (disabled)
May 15 09:38:13.877175 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
May 15 09:38:13.877182 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 09:38:13.877189 kernel: cpuidle: using governor menu
May 15 09:38:13.877196 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 09:38:13.877204 kernel: ASID allocator initialised with 32768 entries
May 15 09:38:13.877211 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 09:38:13.877218 kernel: Serial: AMBA PL011 UART driver
May 15 09:38:13.877225 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 09:38:13.877233 kernel: Modules: 0 pages in range for non-PLT usage
May 15 09:38:13.877241 kernel: Modules: 508944 pages in range for PLT usage
May 15 09:38:13.877248 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 09:38:13.877255 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 09:38:13.877262 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 09:38:13.877270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 09:38:13.877277 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 09:38:13.877284 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 09:38:13.877291 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 09:38:13.877299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 09:38:13.877307 kernel: ACPI: Added _OSI(Module Device)
May 15 09:38:13.877314 kernel: ACPI: Added _OSI(Processor Device)
May 15 09:38:13.877321 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 09:38:13.877328 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 09:38:13.877335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 09:38:13.877343 kernel: ACPI: Interpreter enabled
May 15 09:38:13.877349 kernel: ACPI: Using GIC for interrupt routing
May 15 09:38:13.877356 kernel: ACPI: MCFG table detected, 1 entries
May 15 09:38:13.877364 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 09:38:13.877372 kernel: printk: console [ttyAMA0] enabled
May 15 09:38:13.877379 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 09:38:13.877513 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 09:38:13.877587 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 09:38:13.877652 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 09:38:13.877716 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 09:38:13.877778 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 09:38:13.877790 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 09:38:13.877798 kernel: PCI host bridge to bus 0000:00
May 15 09:38:13.877865 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 09:38:13.877925 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 09:38:13.878000 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 09:38:13.878136 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 09:38:13.878221 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 09:38:13.878300 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 09:38:13.878370 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 09:38:13.878435 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 09:38:13.878502 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:38:13.878566 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:38:13.878631 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 09:38:13.878697 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 09:38:13.878761 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 09:38:13.878818 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 09:38:13.878874 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 09:38:13.878884 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 09:38:13.878891 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 09:38:13.878899 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 09:38:13.878906 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 09:38:13.878913 kernel: iommu: Default domain type: Translated
May 15 09:38:13.878922 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 09:38:13.878937 kernel: efivars: Registered efivars operations
May 15 09:38:13.878945 kernel: vgaarb: loaded
May 15 09:38:13.878952 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 09:38:13.878959 kernel: VFS: Disk quotas dquot_6.6.0
May 15 09:38:13.878967 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 09:38:13.878974 kernel: pnp: PnP ACPI init
May 15 09:38:13.879063 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 09:38:13.879077 kernel: pnp: PnP ACPI: found 1 devices
May 15 09:38:13.879085 kernel: NET: Registered PF_INET protocol family
May 15 09:38:13.879092 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 09:38:13.879099 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 09:38:13.879107 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 09:38:13.879114 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 09:38:13.879121 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 09:38:13.879129 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 09:38:13.879136 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:38:13.879144 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:38:13.879152 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 09:38:13.879159 kernel: PCI: CLS 0 bytes, default 64
May 15 09:38:13.879166 kernel: kvm [1]: HYP mode not available
May 15 09:38:13.879173 kernel: Initialise system trusted keyrings
May 15 09:38:13.879180 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 09:38:13.879188 kernel: Key type asymmetric registered
May 15 09:38:13.879195 kernel: Asymmetric key parser 'x509' registered
May 15 09:38:13.879202 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 09:38:13.879210 kernel: io scheduler mq-deadline registered
May 15 09:38:13.879217 kernel: io scheduler kyber registered
May 15 09:38:13.879224 kernel: io scheduler bfq registered
May 15 09:38:13.879232 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 09:38:13.879239 kernel: ACPI: button: Power Button [PWRB]
May 15 09:38:13.879246 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 09:38:13.879316 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 09:38:13.879326 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 09:38:13.879333 kernel: thunder_xcv, ver 1.0
May 15 09:38:13.879342 kernel: thunder_bgx, ver 1.0
May 15 09:38:13.879349 kernel: nicpf, ver 1.0
May 15 09:38:13.879357 kernel: nicvf, ver 1.0
May 15 09:38:13.879427 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 09:38:13.879488 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T09:38:13 UTC (1747301893)
May 15 09:38:13.879498 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 09:38:13.879506 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 09:38:13.879513 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 09:38:13.879523 kernel: watchdog: Hard watchdog permanently disabled
May 15 09:38:13.879530 kernel: NET: Registered PF_INET6 protocol family
May 15 09:38:13.879537 kernel: Segment Routing with IPv6
May 15 09:38:13.879544 kernel: In-situ OAM (IOAM) with IPv6
May 15 09:38:13.879551 kernel: NET: Registered PF_PACKET protocol family
May 15 09:38:13.879558 kernel: Key type dns_resolver registered
May 15 09:38:13.879565 kernel: registered taskstats version 1
May 15 09:38:13.879573 kernel: Loading compiled-in X.509 certificates
May 15 09:38:13.879580 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 92c83259b69f308571254e31c325f6266f61f369'
May 15 09:38:13.879588 kernel: Key type .fscrypt registered
May 15 09:38:13.879596 kernel: Key type fscrypt-provisioning registered
May 15 09:38:13.879603 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 09:38:13.879611 kernel: ima: Allocated hash algorithm: sha1
May 15 09:38:13.879618 kernel: ima: No architecture policies found
May 15 09:38:13.879625 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 09:38:13.879632 kernel: clk: Disabling unused clocks
May 15 09:38:13.879639 kernel: Freeing unused kernel memory: 39744K
May 15 09:38:13.879646 kernel: Run /init as init process
May 15 09:38:13.879655 kernel: with arguments:
May 15 09:38:13.879662 kernel: /init
May 15 09:38:13.879669 kernel: with environment:
May 15 09:38:13.879676 kernel: HOME=/
May 15 09:38:13.879684 kernel: TERM=linux
May 15 09:38:13.879691 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 09:38:13.879700 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 09:38:13.879721 systemd[1]: Detected virtualization kvm.
May 15 09:38:13.879730 systemd[1]: Detected architecture arm64.
May 15 09:38:13.879738 systemd[1]: Running in initrd.
May 15 09:38:13.879746 systemd[1]: No hostname configured, using default hostname.
May 15 09:38:13.879753 systemd[1]: Hostname set to .
May 15 09:38:13.879762 systemd[1]: Initializing machine ID from VM UUID.
May 15 09:38:13.879769 systemd[1]: Queued start job for default target initrd.target.
May 15 09:38:13.879777 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:38:13.879785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:38:13.879796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 09:38:13.879803 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 09:38:13.879811 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 09:38:13.879819 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 09:38:13.879828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 09:38:13.879837 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 09:38:13.879846 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:38:13.879854 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 09:38:13.879862 systemd[1]: Reached target paths.target - Path Units.
May 15 09:38:13.879869 systemd[1]: Reached target slices.target - Slice Units.
May 15 09:38:13.879877 systemd[1]: Reached target swap.target - Swaps.
May 15 09:38:13.879885 systemd[1]: Reached target timers.target - Timer Units.
May 15 09:38:13.879893 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 09:38:13.879901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 09:38:13.879909 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 09:38:13.879918 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 09:38:13.879926 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:38:13.879939 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 09:38:13.879947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:38:13.879955 systemd[1]: Reached target sockets.target - Socket Units.
May 15 09:38:13.879963 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 09:38:13.879970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 09:38:13.879978 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 09:38:13.879985 systemd[1]: Starting systemd-fsck-usr.service...
May 15 09:38:13.879995 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 09:38:13.880002 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 09:38:13.880010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:38:13.880018 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 09:38:13.880025 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:38:13.880033 systemd[1]: Finished systemd-fsck-usr.service.
May 15 09:38:13.880043 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 09:38:13.880057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:38:13.880065 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:38:13.880090 systemd-journald[239]: Collecting audit messages is disabled.
May 15 09:38:13.880111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 09:38:13.880119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 09:38:13.880127 systemd-journald[239]: Journal started
May 15 09:38:13.880145 systemd-journald[239]: Runtime Journal (/run/log/journal/a4e7f683a4464bdf896d313cbb865261) is 5.9M, max 47.3M, 41.4M free.
May 15 09:38:13.871835 systemd-modules-load[240]: Inserted module 'overlay'
May 15 09:38:13.881563 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 09:38:13.886525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 09:38:13.891084 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 09:38:13.891107 kernel: Bridge firewalling registered
May 15 09:38:13.890485 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 15 09:38:13.891208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 09:38:13.894417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 09:38:13.895288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:38:13.897925 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:38:13.900036 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 09:38:13.900911 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:38:13.907660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 09:38:13.910094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 09:38:13.917405 dracut-cmdline[276]: dracut-dracut-053
May 15 09:38:13.919638 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:38:13.939448 systemd-resolved[283]: Positive Trust Anchors:
May 15 09:38:13.939522 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 09:38:13.939553 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 09:38:13.944175 systemd-resolved[283]: Defaulting to hostname 'linux'.
May 15 09:38:13.945501 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 09:38:13.946356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 09:38:13.984068 kernel: SCSI subsystem initialized
May 15 09:38:13.988064 kernel: Loading iSCSI transport class v2.0-870.
May 15 09:38:13.998065 kernel: iscsi: registered transport (tcp)
May 15 09:38:14.008070 kernel: iscsi: registered transport (qla4xxx)
May 15 09:38:14.008107 kernel: QLogic iSCSI HBA Driver
May 15 09:38:14.048703 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 09:38:14.056220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 09:38:14.071260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 09:38:14.071317 kernel: device-mapper: uevent: version 1.0.3
May 15 09:38:14.071330 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 09:38:14.120066 kernel: raid6: neonx8 gen() 15719 MB/s
May 15 09:38:14.137068 kernel: raid6: neonx4 gen() 15549 MB/s
May 15 09:38:14.154062 kernel: raid6: neonx2 gen() 13149 MB/s
May 15 09:38:14.171064 kernel: raid6: neonx1 gen() 10431 MB/s
May 15 09:38:14.188072 kernel: raid6: int64x8 gen() 6912 MB/s
May 15 09:38:14.205074 kernel: raid6: int64x4 gen() 7296 MB/s
May 15 09:38:14.222072 kernel: raid6: int64x2 gen() 6092 MB/s
May 15 09:38:14.239072 kernel: raid6: int64x1 gen() 5025 MB/s
May 15 09:38:14.239094 kernel: raid6: using algorithm neonx8 gen() 15719 MB/s
May 15 09:38:14.256073 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
May 15 09:38:14.256096 kernel: raid6: using neon recovery algorithm
May 15 09:38:14.261066 kernel: xor: measuring software checksum speed
May 15 09:38:14.261082 kernel: 8regs : 19519 MB/sec
May 15 09:38:14.261091 kernel: 32regs : 18062 MB/sec
May 15 09:38:14.262350 kernel: arm64_neon : 26382 MB/sec
May 15 09:38:14.262363 kernel: xor: using function: arm64_neon (26382 MB/sec)
May 15 09:38:14.314079 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 09:38:14.324543 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 09:38:14.340176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:38:14.352521 systemd-udevd[463]: Using default interface naming scheme 'v255'.
May 15 09:38:14.355611 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:38:14.370201 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 09:38:14.381117 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 15 09:38:14.406094 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 09:38:14.417166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 09:38:14.455478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 09:38:14.467201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 09:38:14.480411 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 09:38:14.483524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 09:38:14.484593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 09:38:14.485851 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 09:38:14.491306 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 09:38:14.493066 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 09:38:14.492294 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 09:38:14.499205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 09:38:14.499236 kernel: GPT:9289727 != 19775487 May 15 09:38:14.502817 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 09:38:14.502856 kernel: GPT:9289727 != 19775487 May 15 09:38:14.502867 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 09:38:14.502878 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 09:38:14.503337 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 09:38:14.509572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 09:38:14.510560 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 09:38:14.512958 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 09:38:14.513910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
May 15 09:38:14.514157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 09:38:14.515959 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 09:38:14.523077 kernel: BTRFS: device fsid 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (514) May 15 09:38:14.523115 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522) May 15 09:38:14.531330 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 09:38:14.540887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 09:38:14.545641 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 09:38:14.550102 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 09:38:14.556110 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 09:38:14.556944 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 09:38:14.562306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 09:38:14.575245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 09:38:14.577125 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 09:38:14.581436 disk-uuid[551]: Primary Header is updated. May 15 09:38:14.581436 disk-uuid[551]: Secondary Entries is updated. May 15 09:38:14.581436 disk-uuid[551]: Secondary Header is updated. May 15 09:38:14.585068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 09:38:14.605356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
May 15 09:38:15.594084 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 09:38:15.594528 disk-uuid[552]: The operation has completed successfully. May 15 09:38:15.615160 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 09:38:15.615252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 09:38:15.635294 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 09:38:15.639911 sh[573]: Success May 15 09:38:15.652084 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 09:38:15.691346 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 09:38:15.693156 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 09:38:15.693894 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 09:38:15.704101 kernel: BTRFS info (device dm-0): first mount of filesystem 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4 May 15 09:38:15.704155 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 09:38:15.704176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 09:38:15.704196 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 09:38:15.705099 kernel: BTRFS info (device dm-0): using free space tree May 15 09:38:15.708219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 09:38:15.709269 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 09:38:15.721261 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 09:38:15.722550 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 09:38:15.731231 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c May 15 09:38:15.731280 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 09:38:15.731293 kernel: BTRFS info (device vda6): using free space tree May 15 09:38:15.734216 kernel: BTRFS info (device vda6): auto enabling async discard May 15 09:38:15.740261 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 09:38:15.742318 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c May 15 09:38:15.746973 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 09:38:15.752414 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 09:38:15.812704 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 09:38:15.822252 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 09:38:15.849564 systemd-networkd[763]: lo: Link UP May 15 09:38:15.849572 systemd-networkd[763]: lo: Gained carrier May 15 09:38:15.850321 systemd-networkd[763]: Enumeration completed May 15 09:38:15.850421 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 09:38:15.850770 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 09:38:15.850773 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 15 09:38:15.856820 ignition[666]: Ignition 2.20.0 May 15 09:38:15.851573 systemd-networkd[763]: eth0: Link UP May 15 09:38:15.856827 ignition[666]: Stage: fetch-offline May 15 09:38:15.851576 systemd-networkd[763]: eth0: Gained carrier May 15 09:38:15.856859 ignition[666]: no configs at "/usr/lib/ignition/base.d" May 15 09:38:15.851583 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 09:38:15.856867 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:15.851896 systemd[1]: Reached target network.target - Network. May 15 09:38:15.857027 ignition[666]: parsed url from cmdline: "" May 15 09:38:15.857030 ignition[666]: no config URL provided May 15 09:38:15.857035 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" May 15 09:38:15.857042 ignition[666]: no config at "/usr/lib/ignition/user.ign" May 15 09:38:15.857084 ignition[666]: op(1): [started] loading QEMU firmware config module May 15 09:38:15.857089 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 09:38:15.862407 ignition[666]: op(1): [finished] loading QEMU firmware config module May 15 09:38:15.872089 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 09:38:15.902977 ignition[666]: parsing config with SHA512: edc81ab263f93bdc4f32b4f46e4eb412a04d2d76db967576dc5dd4edf0cbfdcf77b9f30adb3574bc22b586ff66b9e27830d0fbb4c1e5bb5459aaa884d9342245 May 15 09:38:15.907860 unknown[666]: fetched base config from "system" May 15 09:38:15.907870 unknown[666]: fetched user config from "qemu" May 15 09:38:15.908295 ignition[666]: fetch-offline: fetch-offline passed May 15 09:38:15.908367 ignition[666]: Ignition finished successfully May 15 09:38:15.909628 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 15 09:38:15.911093 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 09:38:15.920236 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 15 09:38:15.930116 ignition[775]: Ignition 2.20.0 May 15 09:38:15.930126 ignition[775]: Stage: kargs May 15 09:38:15.930274 ignition[775]: no configs at "/usr/lib/ignition/base.d" May 15 09:38:15.930283 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:15.931197 ignition[775]: kargs: kargs passed May 15 09:38:15.934013 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 09:38:15.931239 ignition[775]: Ignition finished successfully May 15 09:38:15.942221 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 09:38:15.953268 ignition[784]: Ignition 2.20.0 May 15 09:38:15.953278 ignition[784]: Stage: disks May 15 09:38:15.953416 ignition[784]: no configs at "/usr/lib/ignition/base.d" May 15 09:38:15.953425 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:15.954274 ignition[784]: disks: disks passed May 15 09:38:15.957157 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 09:38:15.954313 ignition[784]: Ignition finished successfully May 15 09:38:15.959741 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 09:38:15.961129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 09:38:15.962353 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 09:38:15.963651 systemd[1]: Reached target sysinit.target - System Initialization. May 15 09:38:15.965042 systemd[1]: Reached target basic.target - Basic System. May 15 09:38:15.973234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 15 09:38:15.983097 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 09:38:15.987035 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 09:38:15.988955 systemd[1]: Mounting sysroot.mount - /sysroot... May 15 09:38:16.033798 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 09:38:16.035001 kernel: EXT4-fs (vda9): mounted filesystem e3ca107a-d829-49e7-81f2-462a85be67d1 r/w with ordered data mode. Quota mode: none. May 15 09:38:16.034911 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 09:38:16.049161 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 09:38:16.050696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 09:38:16.052684 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 09:38:16.052730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 09:38:16.052753 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 09:38:16.058403 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803) May 15 09:38:16.056480 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 09:38:16.058352 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 09:38:16.062433 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c May 15 09:38:16.062449 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 09:38:16.062460 kernel: BTRFS info (device vda6): using free space tree May 15 09:38:16.065184 kernel: BTRFS info (device vda6): auto enabling async discard May 15 09:38:16.065941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 09:38:16.098753 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory May 15 09:38:16.102912 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory May 15 09:38:16.106647 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory May 15 09:38:16.110395 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory May 15 09:38:16.176044 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 09:38:16.183207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 09:38:16.184470 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 09:38:16.189077 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c May 15 09:38:16.203214 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 09:38:16.205599 ignition[916]: INFO : Ignition 2.20.0 May 15 09:38:16.205599 ignition[916]: INFO : Stage: mount May 15 09:38:16.206724 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 09:38:16.206724 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:16.206724 ignition[916]: INFO : mount: mount passed May 15 09:38:16.206724 ignition[916]: INFO : Ignition finished successfully May 15 09:38:16.207997 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 09:38:16.216147 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 09:38:16.702894 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 09:38:16.713231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 09:38:16.718073 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930) May 15 09:38:16.720296 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c May 15 09:38:16.720320 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 09:38:16.720332 kernel: BTRFS info (device vda6): using free space tree May 15 09:38:16.722060 kernel: BTRFS info (device vda6): auto enabling async discard May 15 09:38:16.723190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 09:38:16.738466 ignition[947]: INFO : Ignition 2.20.0 May 15 09:38:16.739210 ignition[947]: INFO : Stage: files May 15 09:38:16.739657 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 09:38:16.739657 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:16.741196 ignition[947]: DEBUG : files: compiled without relabeling support, skipping May 15 09:38:16.741196 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 09:38:16.741196 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 09:38:16.744163 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 09:38:16.744163 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 09:38:16.744163 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 09:38:16.744163 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 09:38:16.743173 unknown[947]: wrote ssh authorized keys file for user: core May 15 09:38:16.748947 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 15 09:38:16.812885 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 15 09:38:17.047601 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 15 09:38:17.049028 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 09:38:17.050375 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 09:38:17.315272 systemd-networkd[763]: eth0: Gained IPv6LL May 15 09:38:17.387837 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 09:38:17.437055 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 09:38:17.438408 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 09:38:17.705421 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 09:38:17.983698 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 09:38:17.983698 ignition[947]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 09:38:17.986807 ignition[947]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 15 09:38:17.986807 ignition[947]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" May 15 09:38:18.011362 ignition[947]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 09:38:18.014464 ignition[947]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 09:38:18.015541 ignition[947]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" May 15 09:38:18.015541 ignition[947]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 09:38:18.015541 ignition[947]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 09:38:18.015541 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 09:38:18.015541 ignition[947]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 09:38:18.015541 ignition[947]: INFO : files: files passed May 15 09:38:18.015541 ignition[947]: INFO : Ignition finished successfully
May 15 09:38:18.016523 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 09:38:18.023189 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 09:38:18.025896 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 09:38:18.026974 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 09:38:18.027071 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 09:38:18.032875 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory May 15 09:38:18.036098 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 09:38:18.036098 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 09:38:18.038640 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 09:38:18.039704 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 09:38:18.040942 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 09:38:18.049214 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 09:38:18.067843 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 09:38:18.067947 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 09:38:18.069516 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 09:38:18.070768 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 09:38:18.072037 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 09:38:18.072684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 09:38:18.087041 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 09:38:18.095197 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 09:38:18.103439 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 09:38:18.104354 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 09:38:18.105855 systemd[1]: Stopped target timers.target - Timer Units. May 15 09:38:18.107131 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 09:38:18.107244 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 09:38:18.109037 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 09:38:18.110481 systemd[1]: Stopped target basic.target - Basic System. May 15 09:38:18.111663 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 09:38:18.112939 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 09:38:18.114417 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 15 09:38:18.115817 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 09:38:18.117135 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 09:38:18.118543 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 09:38:18.120004 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 09:38:18.121242 systemd[1]: Stopped target swap.target - Swaps. May 15 09:38:18.122415 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 09:38:18.122520 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 09:38:18.124190 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 09:38:18.125532 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
May 15 09:38:18.126861 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 09:38:18.127110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 09:38:18.128384 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 09:38:18.128490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 09:38:18.130503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 09:38:18.130612 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 09:38:18.132009 systemd[1]: Stopped target paths.target - Path Units. May 15 09:38:18.133125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 09:38:18.134158 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 09:38:18.135401 systemd[1]: Stopped target slices.target - Slice Units. May 15 09:38:18.136480 systemd[1]: Stopped target sockets.target - Socket Units. May 15 09:38:18.137734 systemd[1]: iscsid.socket: Deactivated successfully. May 15 09:38:18.137817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 09:38:18.139286 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 09:38:18.139363 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 09:38:18.140469 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 09:38:18.140572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 09:38:18.141812 systemd[1]: ignition-files.service: Deactivated successfully. May 15 09:38:18.141907 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 09:38:18.156298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 09:38:18.157013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 15 09:38:18.157171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 09:38:18.159832 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 09:38:18.160530 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 09:38:18.160651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 09:38:18.161908 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 09:38:18.162013 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 09:38:18.167673 ignition[1002]: INFO : Ignition 2.20.0 May 15 09:38:18.167673 ignition[1002]: INFO : Stage: umount May 15 09:38:18.169658 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 09:38:18.169658 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 09:38:18.169658 ignition[1002]: INFO : umount: umount passed May 15 09:38:18.169658 ignition[1002]: INFO : Ignition finished successfully May 15 09:38:18.169463 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 09:38:18.169547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 09:38:18.171327 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 09:38:18.171739 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 09:38:18.171819 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 09:38:18.173636 systemd[1]: Stopped target network.target - Network. May 15 09:38:18.174679 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 09:38:18.174743 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 09:38:18.175982 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 09:38:18.176024 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 09:38:18.177558 systemd[1]: ignition-setup.service: Deactivated successfully. 
May 15 09:38:18.177597 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 09:38:18.178870 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 09:38:18.178910 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 09:38:18.180372 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 09:38:18.181644 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 09:38:18.188090 systemd-networkd[763]: eth0: DHCPv6 lease lost May 15 09:38:18.189891 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 09:38:18.190012 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 09:38:18.191105 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 09:38:18.191155 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 09:38:18.202204 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 09:38:18.202876 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 09:38:18.202936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 09:38:18.204544 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 09:38:18.205977 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 09:38:18.208365 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 09:38:18.211775 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 09:38:18.211850 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 09:38:18.212964 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 09:38:18.213007 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 09:38:18.214371 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
May 15 09:38:18.214408 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 09:38:18.216563 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 09:38:18.216685 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 09:38:18.220303 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 09:38:18.220387 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 09:38:18.221897 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 09:38:18.221971 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 09:38:18.223092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 09:38:18.223124 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 09:38:18.224345 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 09:38:18.224386 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 09:38:18.226290 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 09:38:18.226338 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 15 09:38:18.228635 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 09:38:18.228687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 09:38:18.236240 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 09:38:18.237075 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 09:38:18.237132 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 09:38:18.238942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 09:38:18.238985 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
May 15 09:38:18.241231 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 09:38:18.241307 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 09:38:18.242745 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 09:38:18.242818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 09:38:18.244249 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 09:38:18.245645 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 09:38:18.245693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 09:38:18.247655 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 09:38:18.256999 systemd[1]: Switching root. May 15 09:38:18.279848 systemd-journald[239]: Journal stopped May 15 09:38:18.946577 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 15 09:38:18.946641 kernel: SELinux: policy capability network_peer_controls=1 May 15 09:38:18.946654 kernel: SELinux: policy capability open_perms=1 May 15 09:38:18.946664 kernel: SELinux: policy capability extended_socket_class=1 May 15 09:38:18.946674 kernel: SELinux: policy capability always_check_network=0 May 15 09:38:18.946683 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 09:38:18.946693 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 09:38:18.946706 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 09:38:18.946716 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 09:38:18.946725 kernel: audit: type=1403 audit(1747301898.436:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 09:38:18.946736 systemd[1]: Successfully loaded SELinux policy in 32.174ms. May 15 09:38:18.946752 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.233ms. 
May 15 09:38:18.946764 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 09:38:18.946775 systemd[1]: Detected virtualization kvm. May 15 09:38:18.946786 systemd[1]: Detected architecture arm64. May 15 09:38:18.946796 systemd[1]: Detected first boot. May 15 09:38:18.946808 systemd[1]: Initializing machine ID from VM UUID. May 15 09:38:18.946818 zram_generator::config[1046]: No configuration found. May 15 09:38:18.946830 systemd[1]: Populated /etc with preset unit settings. May 15 09:38:18.946843 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 09:38:18.946854 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 15 09:38:18.946864 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 09:38:18.946875 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 09:38:18.946888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 09:38:18.946900 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 09:38:18.946910 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 09:38:18.946932 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 09:38:18.946944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 09:38:18.946955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 09:38:18.946965 systemd[1]: Created slice user.slice - User and Session Slice. 
May 15 09:38:18.946975 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 09:38:18.946986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 09:38:18.946997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 09:38:18.947009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 09:38:18.947020 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 09:38:18.947031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 09:38:18.947041 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 09:38:18.947066 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 09:38:18.947079 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 15 09:38:18.947094 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 15 09:38:18.947105 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 15 09:38:18.947117 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 09:38:18.947128 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 09:38:18.947138 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 09:38:18.947150 systemd[1]: Reached target slices.target - Slice Units. May 15 09:38:18.947161 systemd[1]: Reached target swap.target - Swaps. May 15 09:38:18.947172 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 09:38:18.947182 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 09:38:18.947193 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 15 09:38:18.947204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 09:38:18.947215 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 09:38:18.947226 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 09:38:18.947236 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 09:38:18.947247 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 09:38:18.947257 systemd[1]: Mounting media.mount - External Media Directory... May 15 09:38:18.947268 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 09:38:18.947278 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 09:38:18.947289 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 09:38:18.947302 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 09:38:18.947312 systemd[1]: Reached target machines.target - Containers. May 15 09:38:18.947322 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 09:38:18.947332 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 09:38:18.947343 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 15 09:38:18.947353 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 09:38:18.947363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 09:38:18.947374 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 09:38:18.947385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 15 09:38:18.947403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 09:38:18.947415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 09:38:18.947434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 09:38:18.947451 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 09:38:18.947462 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 15 09:38:18.947472 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 09:38:18.947482 systemd[1]: Stopped systemd-fsck-usr.service. May 15 09:38:18.947493 kernel: fuse: init (API version 7.39) May 15 09:38:18.947506 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 09:38:18.947516 kernel: loop: module loaded May 15 09:38:18.947527 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 09:38:18.947537 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 09:38:18.947548 kernel: ACPI: bus type drm_connector registered May 15 09:38:18.947558 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 09:38:18.947569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 09:38:18.947579 systemd[1]: verity-setup.service: Deactivated successfully. May 15 09:38:18.947589 systemd[1]: Stopped verity-setup.service. May 15 09:38:18.947624 systemd-journald[1113]: Collecting audit messages is disabled. May 15 09:38:18.947647 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 09:38:18.947657 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 09:38:18.947668 systemd[1]: Mounted media.mount - External Media Directory. 
May 15 09:38:18.947680 systemd-journald[1113]: Journal started May 15 09:38:18.947701 systemd-journald[1113]: Runtime Journal (/run/log/journal/a4e7f683a4464bdf896d313cbb865261) is 5.9M, max 47.3M, 41.4M free. May 15 09:38:18.777017 systemd[1]: Queued start job for default target multi-user.target. May 15 09:38:18.795992 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 15 09:38:18.796312 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 09:38:18.951083 systemd[1]: Started systemd-journald.service - Journal Service. May 15 09:38:18.950225 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 09:38:18.951165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 09:38:18.952077 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 09:38:18.954072 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 09:38:18.955109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 09:38:18.956220 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 09:38:18.956347 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 09:38:18.957406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 09:38:18.957530 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 09:38:18.958554 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 09:38:18.958676 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 09:38:18.959655 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 09:38:18.959777 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 09:38:18.961022 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 09:38:18.961163 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 15 09:38:18.962139 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 09:38:18.962253 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 09:38:18.963244 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 09:38:18.964411 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 09:38:18.965503 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 09:38:18.976859 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 09:38:18.987151 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 09:38:18.988823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 09:38:18.989646 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 09:38:18.989671 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 09:38:18.991260 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 15 09:38:18.993206 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 09:38:18.994905 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 09:38:18.995817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 09:38:18.997513 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 09:38:18.999076 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 09:38:18.999936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 15 09:38:19.003199 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 09:38:19.004097 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 09:38:19.006243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 09:38:19.008181 systemd-journald[1113]: Time spent on flushing to /var/log/journal/a4e7f683a4464bdf896d313cbb865261 is 32.411ms for 858 entries. May 15 09:38:19.008181 systemd-journald[1113]: System Journal (/var/log/journal/a4e7f683a4464bdf896d313cbb865261) is 8.0M, max 195.6M, 187.6M free. May 15 09:38:19.049655 systemd-journald[1113]: Received client request to flush runtime journal. May 15 09:38:19.049707 kernel: loop0: detected capacity change from 0 to 116808 May 15 09:38:19.010275 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 09:38:19.013268 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 09:38:19.015269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 09:38:19.016507 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 09:38:19.017503 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 09:38:19.018520 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 09:38:19.021352 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 09:38:19.024555 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 09:38:19.027245 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 15 09:38:19.030467 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 09:38:19.043733 udevadm[1167]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 09:38:19.047345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 09:38:19.053774 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 09:38:19.059544 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 09:38:19.062101 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 09:38:19.063294 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 15 09:38:19.066131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 09:38:19.076293 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 15 09:38:19.088061 kernel: loop1: detected capacity change from 0 to 201592 May 15 09:38:19.095489 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 15 09:38:19.095508 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. May 15 09:38:19.099230 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 09:38:19.119079 kernel: loop2: detected capacity change from 0 to 113536 May 15 09:38:19.155735 kernel: loop3: detected capacity change from 0 to 116808 May 15 09:38:19.159088 kernel: loop4: detected capacity change from 0 to 201592 May 15 09:38:19.164073 kernel: loop5: detected capacity change from 0 to 113536 May 15 09:38:19.166455 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 15 09:38:19.166818 (sd-merge)[1182]: Merged extensions into '/usr'. May 15 09:38:19.171355 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... May 15 09:38:19.171373 systemd[1]: Reloading... May 15 09:38:19.224089 zram_generator::config[1206]: No configuration found. 
May 15 09:38:19.274641 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 09:38:19.320636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:38:19.355499 systemd[1]: Reloading finished in 183 ms. May 15 09:38:19.387676 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 09:38:19.392148 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 09:38:19.406466 systemd[1]: Starting ensure-sysext.service... May 15 09:38:19.408124 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 09:38:19.418366 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... May 15 09:38:19.418379 systemd[1]: Reloading... May 15 09:38:19.443286 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 09:38:19.443561 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 09:38:19.444410 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 09:38:19.444629 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 15 09:38:19.444679 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 15 09:38:19.449573 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 15 09:38:19.449587 systemd-tmpfiles[1244]: Skipping /boot May 15 09:38:19.456460 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. 
May 15 09:38:19.456477 systemd-tmpfiles[1244]: Skipping /boot May 15 09:38:19.471069 zram_generator::config[1273]: No configuration found. May 15 09:38:19.548202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:38:19.583109 systemd[1]: Reloading finished in 164 ms. May 15 09:38:19.601266 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 09:38:19.613494 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 09:38:19.621889 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 09:38:19.624104 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 09:38:19.626268 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 09:38:19.630425 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 09:38:19.632717 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 09:38:19.636756 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 09:38:19.639835 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 09:38:19.643309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 09:38:19.646016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 09:38:19.648458 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 09:38:19.649422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 09:38:19.653039 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 09:38:19.658863 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 09:38:19.660433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 09:38:19.660577 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 09:38:19.662558 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 09:38:19.662710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 09:38:19.665145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 09:38:19.676086 systemd[1]: Finished ensure-sysext.service. May 15 09:38:19.677798 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 09:38:19.680576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 09:38:19.682126 systemd-udevd[1315]: Using default interface naming scheme 'v255'. May 15 09:38:19.685189 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 09:38:19.688131 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 09:38:19.689824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 09:38:19.693269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 09:38:19.696582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 09:38:19.701263 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 09:38:19.704935 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 09:38:19.706244 augenrules[1347]: No rules May 15 09:38:19.707707 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
May 15 09:38:19.708834 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 09:38:19.710016 systemd[1]: audit-rules.service: Deactivated successfully. May 15 09:38:19.712092 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 09:38:19.714831 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 09:38:19.716104 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 09:38:19.716241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 09:38:19.717287 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 09:38:19.717402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 09:38:19.719835 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 09:38:19.719988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 09:38:19.721236 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 09:38:19.721365 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 09:38:19.722662 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 09:38:19.754286 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 09:38:19.755784 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 09:38:19.755849 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 09:38:19.755874 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 15 09:38:19.756102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1362) May 15 09:38:19.756024 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 09:38:19.759391 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 15 09:38:19.782056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 09:38:19.785277 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 09:38:19.809076 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 09:38:19.859312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 09:38:19.859576 systemd-networkd[1379]: lo: Link UP May 15 09:38:19.859674 systemd-networkd[1379]: lo: Gained carrier May 15 09:38:19.861835 systemd-networkd[1379]: Enumeration completed May 15 09:38:19.863343 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 09:38:19.868240 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 09:38:19.872433 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 09:38:19.872582 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 09:38:19.872592 systemd-networkd[1379]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 09:38:19.873231 systemd-networkd[1379]: eth0: Link UP May 15 09:38:19.873240 systemd-networkd[1379]: eth0: Gained carrier May 15 09:38:19.873255 systemd-networkd[1379]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 09:38:19.873662 systemd[1]: Reached target time-set.target - System Time Set. 
May 15 09:38:19.874031 systemd-resolved[1311]: Positive Trust Anchors: May 15 09:38:19.874112 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 09:38:19.874143 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 09:38:19.881083 systemd-resolved[1311]: Defaulting to hostname 'linux'. May 15 09:38:19.884994 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 09:38:19.885850 systemd[1]: Reached target network.target - Network. May 15 09:38:19.886729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 09:38:19.889097 systemd-networkd[1379]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 09:38:19.889589 systemd-timesyncd[1343]: Network configuration changed, trying to establish connection. May 15 09:38:19.890647 systemd-timesyncd[1343]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 09:38:19.890807 systemd-timesyncd[1343]: Initial clock synchronization to Thu 2025-05-15 09:38:20.112148 UTC. May 15 09:38:19.893006 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 09:38:19.901206 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 09:38:19.911674 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 09:38:19.917765 lvm[1406]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. May 15 09:38:19.953135 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 09:38:19.954249 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 09:38:19.955028 systemd[1]: Reached target sysinit.target - System Initialization. May 15 09:38:19.955827 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 09:38:19.956754 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 09:38:19.957795 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 09:38:19.958678 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 09:38:19.959584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 09:38:19.960466 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 09:38:19.960498 systemd[1]: Reached target paths.target - Path Units. May 15 09:38:19.961126 systemd[1]: Reached target timers.target - Timer Units. May 15 09:38:19.962556 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 09:38:19.964540 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 09:38:19.977945 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 09:38:19.979822 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 09:38:19.981085 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 09:38:19.981927 systemd[1]: Reached target sockets.target - Socket Units. May 15 09:38:19.982635 systemd[1]: Reached target basic.target - Basic System. 
May 15 09:38:19.983319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 09:38:19.983351 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 09:38:19.984196 systemd[1]: Starting containerd.service - containerd container runtime... May 15 09:38:19.985821 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 09:38:19.986949 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 09:38:19.988724 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 09:38:19.993123 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 09:38:19.993905 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 09:38:19.995182 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 09:38:19.997133 jq[1415]: false May 15 09:38:19.998200 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 09:38:20.001734 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 09:38:20.004069 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 09:38:20.006277 extend-filesystems[1416]: Found loop3 May 15 09:38:20.006277 extend-filesystems[1416]: Found loop4 May 15 09:38:20.006277 extend-filesystems[1416]: Found loop5 May 15 09:38:20.006277 extend-filesystems[1416]: Found vda May 15 09:38:20.006277 extend-filesystems[1416]: Found vda1 May 15 09:38:20.006277 extend-filesystems[1416]: Found vda2 May 15 09:38:20.006277 extend-filesystems[1416]: Found vda3 May 15 09:38:20.014778 extend-filesystems[1416]: Found usr May 15 09:38:20.014778 extend-filesystems[1416]: Found vda4 May 15 09:38:20.014778 extend-filesystems[1416]: Found vda6 May 15 09:38:20.014778 extend-filesystems[1416]: Found vda7 May 15 09:38:20.014778 extend-filesystems[1416]: Found vda9 May 15 09:38:20.014778 extend-filesystems[1416]: Checking size of /dev/vda9 May 15 09:38:20.012223 dbus-daemon[1414]: [system] SELinux support is enabled May 15 09:38:20.007870 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 09:38:20.012635 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 09:38:20.012988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 09:38:20.014435 systemd[1]: Starting update-engine.service - Update Engine... May 15 09:38:20.016779 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 09:38:20.018224 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 09:38:20.022700 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 09:38:20.027457 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 09:38:20.030474 jq[1430]: true May 15 09:38:20.030023 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
May 15 09:38:20.033233 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 09:38:20.033383 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 09:38:20.042987 systemd[1]: motdgen.service: Deactivated successfully. May 15 09:38:20.043167 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 09:38:20.046652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 09:38:20.046675 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 09:38:20.049107 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1369) May 15 09:38:20.051009 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 09:38:20.051034 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 09:38:20.052276 jq[1437]: true May 15 09:38:20.053796 extend-filesystems[1416]: Resized partition /dev/vda9 May 15 09:38:20.057842 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) May 15 09:38:20.071088 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 09:38:20.068398 (ntainerd)[1447]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 09:38:20.075299 update_engine[1428]: I20250515 09:38:20.073487 1428 main.cc:92] Flatcar Update Engine starting May 15 09:38:20.084269 systemd[1]: Started update-engine.service - Update Engine. 
May 15 09:38:20.086169 tar[1436]: linux-arm64/LICENSE May 15 09:38:20.086169 tar[1436]: linux-arm64/helm May 15 09:38:20.086460 update_engine[1428]: I20250515 09:38:20.086153 1428 update_check_scheduler.cc:74] Next update check in 2m18s May 15 09:38:20.098085 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 09:38:20.097243 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 09:38:20.109280 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button) May 15 09:38:20.109815 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 09:38:20.109815 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 09:38:20.109815 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 09:38:20.119188 extend-filesystems[1416]: Resized filesystem in /dev/vda9 May 15 09:38:20.122059 bash[1467]: Updated "/home/core/.ssh/authorized_keys" May 15 09:38:20.110292 systemd-logind[1424]: New seat seat0. May 15 09:38:20.110984 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 09:38:20.111154 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 09:38:20.113542 systemd[1]: Started systemd-logind.service - User Login Management. May 15 09:38:20.121763 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 09:38:20.129676 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 15 09:38:20.153934 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 09:38:20.277936 containerd[1447]: time="2025-05-15T09:38:20.277847276Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 09:38:20.307110 containerd[1447]: time="2025-05-15T09:38:20.307056490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.308634 containerd[1447]: time="2025-05-15T09:38:20.308520096Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 09:38:20.308634 containerd[1447]: time="2025-05-15T09:38:20.308553640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 09:38:20.308634 containerd[1447]: time="2025-05-15T09:38:20.308569754Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 09:38:20.308745 containerd[1447]: time="2025-05-15T09:38:20.308713262Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 09:38:20.308745 containerd[1447]: time="2025-05-15T09:38:20.308730198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.308797 containerd[1447]: time="2025-05-15T09:38:20.308787379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:38:20.308819 containerd[1447]: time="2025-05-15T09:38:20.308800821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 15 09:38:20.308966 containerd[1447]: time="2025-05-15T09:38:20.308940588Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:38:20.308966 containerd[1447]: time="2025-05-15T09:38:20.308964143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.309026 containerd[1447]: time="2025-05-15T09:38:20.308977133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:38:20.309026 containerd[1447]: time="2025-05-15T09:38:20.308987328Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.309063 containerd[1447]: time="2025-05-15T09:38:20.309055567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.309312 containerd[1447]: time="2025-05-15T09:38:20.309290252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 09:38:20.309416 containerd[1447]: time="2025-05-15T09:38:20.309392405Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:38:20.309416 containerd[1447]: time="2025-05-15T09:38:20.309409465Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 09:38:20.309496 containerd[1447]: time="2025-05-15T09:38:20.309480006Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 09:38:20.309567 containerd[1447]: time="2025-05-15T09:38:20.309527280Z" level=info msg="metadata content store policy set" policy=shared May 15 09:38:20.312890 containerd[1447]: time="2025-05-15T09:38:20.312861953Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 09:38:20.312970 containerd[1447]: time="2025-05-15T09:38:20.312908117Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 09:38:20.312970 containerd[1447]: time="2025-05-15T09:38:20.312928054Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 09:38:20.312970 containerd[1447]: time="2025-05-15T09:38:20.312945443Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 09:38:20.312970 containerd[1447]: time="2025-05-15T09:38:20.312962338Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 09:38:20.313164 containerd[1447]: time="2025-05-15T09:38:20.313141075Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 09:38:20.313395 containerd[1447]: time="2025-05-15T09:38:20.313375842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 09:38:20.313503 containerd[1447]: time="2025-05-15T09:38:20.313482517Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 09:38:20.313547 containerd[1447]: time="2025-05-15T09:38:20.313510101Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 May 15 09:38:20.313547 containerd[1447]: time="2025-05-15T09:38:20.313529380Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 09:38:20.313547 containerd[1447]: time="2025-05-15T09:38:20.313542946Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313606 containerd[1447]: time="2025-05-15T09:38:20.313555237Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313606 containerd[1447]: time="2025-05-15T09:38:20.313567035Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313606 containerd[1447]: time="2025-05-15T09:38:20.313578833Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313606 containerd[1447]: time="2025-05-15T09:38:20.313592275Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313606 containerd[1447]: time="2025-05-15T09:38:20.313605101Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313617598Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313629272Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313648059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313660679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313671901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313684439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313696 containerd[1447]: time="2025-05-15T09:38:20.313695374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313709104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313719792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313731138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313743182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313758228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313769738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313780426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313791484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313813 containerd[1447]: time="2025-05-15T09:38:20.313805296Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 09:38:20.313964 containerd[1447]: time="2025-05-15T09:38:20.313823795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313964 containerd[1447]: time="2025-05-15T09:38:20.313836415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 09:38:20.313964 containerd[1447]: time="2025-05-15T09:38:20.313856270Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 09:38:20.314147 containerd[1447]: time="2025-05-15T09:38:20.314037844Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314176090Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314273556Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314293412Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314326586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314346523Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314357704Z" level=info msg="NRI interface is disabled by configuration." May 15 09:38:20.314640 containerd[1447]: time="2025-05-15T09:38:20.314371681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 09:38:20.316375 containerd[1447]: time="2025-05-15T09:38:20.315321479Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 09:38:20.316375 containerd[1447]: time="2025-05-15T09:38:20.315446611Z" level=info msg="Connect containerd service" May 15 09:38:20.316375 containerd[1447]: time="2025-05-15T09:38:20.315504450Z" level=info msg="using legacy CRI server" May 15 09:38:20.316375 containerd[1447]: time="2025-05-15T09:38:20.315514316Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 09:38:20.316375 containerd[1447]: time="2025-05-15T09:38:20.315771076Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 09:38:20.316970 containerd[1447]: time="2025-05-15T09:38:20.316942281Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 15 09:38:20.317388 containerd[1447]: time="2025-05-15T09:38:20.317355251Z" level=info msg="Start subscribing containerd event" May 15 09:38:20.317540 containerd[1447]: time="2025-05-15T09:38:20.317522231Z" level=info msg="Start recovering state" May 15 09:38:20.318115 containerd[1447]: time="2025-05-15T09:38:20.317913496Z" level=info msg="Start event monitor" May 15 09:38:20.318115 containerd[1447]: time="2025-05-15T09:38:20.317937955Z" level=info msg="Start snapshots syncer" May 15 09:38:20.318115 containerd[1447]: time="2025-05-15T09:38:20.317953494Z" level=info msg="Start cni network conf syncer for default" May 15 09:38:20.318115 containerd[1447]: time="2025-05-15T09:38:20.317975569Z" level=info msg="Start streaming server" May 15 09:38:20.318972 containerd[1447]: time="2025-05-15T09:38:20.318891823Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 09:38:20.319252 containerd[1447]: time="2025-05-15T09:38:20.319228620Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 09:38:20.319323 containerd[1447]: time="2025-05-15T09:38:20.319308410Z" level=info msg="containerd successfully booted in 0.043027s" May 15 09:38:20.319375 systemd[1]: Started containerd.service - containerd container runtime. May 15 09:38:20.480577 tar[1436]: linux-arm64/README.md May 15 09:38:20.493366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 09:38:20.821809 sshd_keygen[1431]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 09:38:20.841177 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 09:38:20.852303 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 09:38:20.857426 systemd[1]: issuegen.service: Deactivated successfully. May 15 09:38:20.857610 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 09:38:20.859818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
May 15 09:38:20.872204 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 09:38:20.882362 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 09:38:20.884097 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 09:38:20.885051 systemd[1]: Reached target getty.target - Login Prompts. May 15 09:38:21.478922 systemd-networkd[1379]: eth0: Gained IPv6LL May 15 09:38:21.482156 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 09:38:21.483514 systemd[1]: Reached target network-online.target - Network is Online. May 15 09:38:21.495308 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 09:38:21.497304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:21.499037 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 09:38:21.513275 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 09:38:21.514177 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 09:38:21.515590 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 09:38:21.517520 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 09:38:22.030404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:22.031727 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 09:38:22.034914 (kubelet)[1528]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:38:22.037170 systemd[1]: Startup finished in 546ms (kernel) + 4.736s (initrd) + 3.631s (userspace) = 8.914s. 
May 15 09:38:22.457336 kubelet[1528]: E0515 09:38:22.457233 1528 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:38:22.460246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:38:22.460402 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:38:26.817764 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 09:38:26.818872 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:53306.service - OpenSSH per-connection server daemon (10.0.0.1:53306). May 15 09:38:26.877216 sshd[1542]: Accepted publickey for core from 10.0.0.1 port 53306 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:26.879157 sshd-session[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:26.886959 systemd-logind[1424]: New session 1 of user core. May 15 09:38:26.887895 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 09:38:26.897270 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 09:38:26.907122 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 09:38:26.909281 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 09:38:26.915194 (systemd)[1546]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 09:38:26.998191 systemd[1546]: Queued start job for default target default.target. May 15 09:38:27.008963 systemd[1546]: Created slice app.slice - User Application Slice. May 15 09:38:27.008991 systemd[1546]: Reached target paths.target - Paths. 
May 15 09:38:27.009003 systemd[1546]: Reached target timers.target - Timers. May 15 09:38:27.010216 systemd[1546]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 09:38:27.019953 systemd[1546]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 09:38:27.020015 systemd[1546]: Reached target sockets.target - Sockets. May 15 09:38:27.020027 systemd[1546]: Reached target basic.target - Basic System. May 15 09:38:27.020079 systemd[1546]: Reached target default.target - Main User Target. May 15 09:38:27.020104 systemd[1546]: Startup finished in 99ms. May 15 09:38:27.020375 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 09:38:27.021592 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 09:38:27.086020 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:53320.service - OpenSSH per-connection server daemon (10.0.0.1:53320). May 15 09:38:27.125352 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 53320 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.126791 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.131577 systemd-logind[1424]: New session 2 of user core. May 15 09:38:27.139267 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 09:38:27.194202 sshd[1559]: Connection closed by 10.0.0.1 port 53320 May 15 09:38:27.195040 sshd-session[1557]: pam_unix(sshd:session): session closed for user core May 15 09:38:27.207688 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:53320.service: Deactivated successfully. May 15 09:38:27.210571 systemd[1]: session-2.scope: Deactivated successfully. May 15 09:38:27.212881 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit. May 15 09:38:27.221672 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:53334.service - OpenSSH per-connection server daemon (10.0.0.1:53334). May 15 09:38:27.223079 systemd-logind[1424]: Removed session 2. 
May 15 09:38:27.256107 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 53334 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.257349 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.262107 systemd-logind[1424]: New session 3 of user core. May 15 09:38:27.272215 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 09:38:27.323579 sshd[1566]: Connection closed by 10.0.0.1 port 53334 May 15 09:38:27.324162 sshd-session[1564]: pam_unix(sshd:session): session closed for user core May 15 09:38:27.333294 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:53334.service: Deactivated successfully. May 15 09:38:27.335396 systemd[1]: session-3.scope: Deactivated successfully. May 15 09:38:27.338218 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit. May 15 09:38:27.339678 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:53340.service - OpenSSH per-connection server daemon (10.0.0.1:53340). May 15 09:38:27.340539 systemd-logind[1424]: Removed session 3. May 15 09:38:27.380715 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 53340 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.381832 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.385698 systemd-logind[1424]: New session 4 of user core. May 15 09:38:27.400226 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 09:38:27.453067 sshd[1573]: Connection closed by 10.0.0.1 port 53340 May 15 09:38:27.453368 sshd-session[1571]: pam_unix(sshd:session): session closed for user core May 15 09:38:27.462346 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:53340.service: Deactivated successfully. May 15 09:38:27.464012 systemd[1]: session-4.scope: Deactivated successfully. May 15 09:38:27.465343 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. 
May 15 09:38:27.466979 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:53344.service - OpenSSH per-connection server daemon (10.0.0.1:53344). May 15 09:38:27.468411 systemd-logind[1424]: Removed session 4. May 15 09:38:27.505104 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 53344 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.506329 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.510125 systemd-logind[1424]: New session 5 of user core. May 15 09:38:27.523221 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 09:38:27.579672 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 09:38:27.579948 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:38:27.598913 sudo[1581]: pam_unix(sudo:session): session closed for user root May 15 09:38:27.600491 sshd[1580]: Connection closed by 10.0.0.1 port 53344 May 15 09:38:27.600994 sshd-session[1578]: pam_unix(sshd:session): session closed for user core May 15 09:38:27.617842 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:53344.service: Deactivated successfully. May 15 09:38:27.619485 systemd[1]: session-5.scope: Deactivated successfully. May 15 09:38:27.620600 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 15 09:38:27.629391 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:53346.service - OpenSSH per-connection server daemon (10.0.0.1:53346). May 15 09:38:27.630366 systemd-logind[1424]: Removed session 5. May 15 09:38:27.664697 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 53346 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.666072 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.670238 systemd-logind[1424]: New session 6 of user core. 
May 15 09:38:27.682238 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 09:38:27.735229 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 09:38:27.735511 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:38:27.738536 sudo[1590]: pam_unix(sudo:session): session closed for user root May 15 09:38:27.743386 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 09:38:27.743661 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:38:27.762396 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 09:38:27.786514 augenrules[1612]: No rules May 15 09:38:27.788194 systemd[1]: audit-rules.service: Deactivated successfully. May 15 09:38:27.788435 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 09:38:27.789913 sudo[1589]: pam_unix(sudo:session): session closed for user root May 15 09:38:27.791839 sshd[1588]: Connection closed by 10.0.0.1 port 53346 May 15 09:38:27.792662 sshd-session[1586]: pam_unix(sshd:session): session closed for user core May 15 09:38:27.801348 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:53346.service: Deactivated successfully. May 15 09:38:27.803243 systemd[1]: session-6.scope: Deactivated successfully. May 15 09:38:27.804867 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 15 09:38:27.820369 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). May 15 09:38:27.821283 systemd-logind[1424]: Removed session 6. 
May 15 09:38:27.854866 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:38:27.855975 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:38:27.859995 systemd-logind[1424]: New session 7 of user core. May 15 09:38:27.869223 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 09:38:27.921088 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 09:38:27.921365 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:38:28.247290 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 09:38:28.247463 (dockerd)[1644]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 09:38:28.492026 dockerd[1644]: time="2025-05-15T09:38:28.491610186Z" level=info msg="Starting up" May 15 09:38:28.661586 dockerd[1644]: time="2025-05-15T09:38:28.661464217Z" level=info msg="Loading containers: start." May 15 09:38:28.817085 kernel: Initializing XFRM netlink socket May 15 09:38:28.902353 systemd-networkd[1379]: docker0: Link UP May 15 09:38:28.935475 dockerd[1644]: time="2025-05-15T09:38:28.935327943Z" level=info msg="Loading containers: done." May 15 09:38:28.948705 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1759312584-merged.mount: Deactivated successfully. 
May 15 09:38:28.963782 dockerd[1644]: time="2025-05-15T09:38:28.963337600Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 09:38:28.963782 dockerd[1644]: time="2025-05-15T09:38:28.963449373Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 15 09:38:28.963782 dockerd[1644]: time="2025-05-15T09:38:28.963567971Z" level=info msg="Daemon has completed initialization" May 15 09:38:29.008230 dockerd[1644]: time="2025-05-15T09:38:29.008166512Z" level=info msg="API listen on /run/docker.sock" May 15 09:38:29.008443 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 09:38:29.620021 containerd[1447]: time="2025-05-15T09:38:29.619980672Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\"" May 15 09:38:30.200313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3581468264.mount: Deactivated successfully. 
May 15 09:38:31.734609 containerd[1447]: time="2025-05-15T09:38:31.734548842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:31.735090 containerd[1447]: time="2025-05-15T09:38:31.735041563Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.4: active requests=0, bytes read=26233120" May 15 09:38:31.738477 containerd[1447]: time="2025-05-15T09:38:31.735828187Z" level=info msg="ImageCreate event name:\"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:31.739569 containerd[1447]: time="2025-05-15T09:38:31.739519009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:31.740854 containerd[1447]: time="2025-05-15T09:38:31.740744896Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.4\" with image id \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:631c6cc78b2862be4fed7df3384a643ef7297eebadae22e8ef9cbe2e19b6386f\", size \"26229918\" in 2.120724269s" May 15 09:38:31.740854 containerd[1447]: time="2025-05-15T09:38:31.740779273Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.4\" returns image reference \"sha256:ab579d62aa850c7d0eca948aad11fcf813743e3b6c9742241c32cb4f1638968b\"" May 15 09:38:31.741550 containerd[1447]: time="2025-05-15T09:38:31.741505112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\"" May 15 09:38:32.710777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 15 09:38:32.720215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:32.813392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:32.817309 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:38:32.915495 kubelet[1900]: E0515 09:38:32.915442 1900 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:38:32.919113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:38:32.919255 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:38:33.183512 containerd[1447]: time="2025-05-15T09:38:33.183401432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:33.185022 containerd[1447]: time="2025-05-15T09:38:33.184966555Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.4: active requests=0, bytes read=22529573" May 15 09:38:33.186210 containerd[1447]: time="2025-05-15T09:38:33.186176111Z" level=info msg="ImageCreate event name:\"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:33.188968 containerd[1447]: time="2025-05-15T09:38:33.188933867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:33.190529 containerd[1447]: time="2025-05-15T09:38:33.190493885Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.4\" with image id \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:25e29187ea66f0ff9b9a00114849c3a30b649005c900a8b2a69e3f3fa56448fb\", size \"23971132\" in 1.448952595s" May 15 09:38:33.190562 containerd[1447]: time="2025-05-15T09:38:33.190534362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.4\" returns image reference \"sha256:79534fade29d07745acc698bbf598b0604a9ea1fd7917822c816a74fc0b55965\"" May 15 09:38:33.190999 containerd[1447]: time="2025-05-15T09:38:33.190936998Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\"" May 15 09:38:34.565317 containerd[1447]: time="2025-05-15T09:38:34.565197037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:34.565886 containerd[1447]: time="2025-05-15T09:38:34.565846319Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.4: active requests=0, bytes read=17482175" May 15 09:38:34.566751 containerd[1447]: time="2025-05-15T09:38:34.566721802Z" level=info msg="ImageCreate event name:\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:34.569607 containerd[1447]: time="2025-05-15T09:38:34.569577106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:34.571638 containerd[1447]: time="2025-05-15T09:38:34.571605694Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.4\" with image id 
\"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:09c55f8dac59a4b8e5e354140f5a4bdd6fa9bd95c42d6bcba6782ed37c31b5a2\", size \"18923752\" in 1.380643741s" May 15 09:38:34.571681 containerd[1447]: time="2025-05-15T09:38:34.571638634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.4\" returns image reference \"sha256:730fbc2590716b8202fcdd928a813b847575ebf03911a059979257cd6cbb8245\"" May 15 09:38:34.572126 containerd[1447]: time="2025-05-15T09:38:34.572105258Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 09:38:35.559819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount643036294.mount: Deactivated successfully. May 15 09:38:35.780360 containerd[1447]: time="2025-05-15T09:38:35.780184405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:35.781180 containerd[1447]: time="2025-05-15T09:38:35.781137955Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.4: active requests=0, bytes read=27370353" May 15 09:38:35.781881 containerd[1447]: time="2025-05-15T09:38:35.781816482Z" level=info msg="ImageCreate event name:\"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:35.784139 containerd[1447]: time="2025-05-15T09:38:35.784095447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:35.784866 containerd[1447]: time="2025-05-15T09:38:35.784720213Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.4\" with image id \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36\", size \"27369370\" in 1.212585153s" May 15 09:38:35.784866 containerd[1447]: time="2025-05-15T09:38:35.784763133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 09:38:35.785431 containerd[1447]: time="2025-05-15T09:38:35.785256370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 15 09:38:36.373499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount622023781.mount: Deactivated successfully. May 15 09:38:37.459876 containerd[1447]: time="2025-05-15T09:38:37.459811494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.460627 containerd[1447]: time="2025-05-15T09:38:37.460545749Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 15 09:38:37.462562 containerd[1447]: time="2025-05-15T09:38:37.462055136Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.465982 containerd[1447]: time="2025-05-15T09:38:37.465910416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.467255 containerd[1447]: time="2025-05-15T09:38:37.467117861Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.68182645s" May 15 09:38:37.467255 containerd[1447]: time="2025-05-15T09:38:37.467154285Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 15 09:38:37.467791 containerd[1447]: time="2025-05-15T09:38:37.467743526Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 15 09:38:37.894207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4241886266.mount: Deactivated successfully. May 15 09:38:37.898800 containerd[1447]: time="2025-05-15T09:38:37.898749518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.899486 containerd[1447]: time="2025-05-15T09:38:37.899436719Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 15 09:38:37.900144 containerd[1447]: time="2025-05-15T09:38:37.900110441Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.902524 containerd[1447]: time="2025-05-15T09:38:37.902490392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:37.903398 containerd[1447]: time="2025-05-15T09:38:37.903372629Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.581647ms" May 15 
09:38:37.903398 containerd[1447]: time="2025-05-15T09:38:37.903399947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 15 09:38:37.903970 containerd[1447]: time="2025-05-15T09:38:37.903796518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 15 09:38:38.490371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55408035.mount: Deactivated successfully. May 15 09:38:40.774826 containerd[1447]: time="2025-05-15T09:38:40.774777413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:40.775795 containerd[1447]: time="2025-05-15T09:38:40.775738973Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 15 09:38:40.776724 containerd[1447]: time="2025-05-15T09:38:40.776697687Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:40.779934 containerd[1447]: time="2025-05-15T09:38:40.779903460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:38:40.784665 containerd[1447]: time="2025-05-15T09:38:40.784548427Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.880721791s" May 15 09:38:40.784665 containerd[1447]: time="2025-05-15T09:38:40.784588785Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image 
reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 15 09:38:43.068817 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 09:38:43.078210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:43.166990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:43.170560 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:38:43.204591 kubelet[2065]: E0515 09:38:43.204524 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:38:43.207037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:38:43.207293 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:38:45.454111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:45.461247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:45.484586 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)... May 15 09:38:45.484602 systemd[1]: Reloading... May 15 09:38:45.548083 zram_generator::config[2119]: No configuration found. May 15 09:38:45.746678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:38:45.798834 systemd[1]: Reloading finished in 313 ms. 
May 15 09:38:45.843383 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 09:38:45.843456 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 09:38:45.843652 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:45.845725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:45.953595 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:45.957883 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 09:38:45.991103 kubelet[2165]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:38:45.991103 kubelet[2165]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 09:38:45.991103 kubelet[2165]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 09:38:45.991103 kubelet[2165]: I0515 09:38:45.990333 2165 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 09:38:46.697768 kubelet[2165]: I0515 09:38:46.697720 2165 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 09:38:46.697768 kubelet[2165]: I0515 09:38:46.697754 2165 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 09:38:46.698037 kubelet[2165]: I0515 09:38:46.698011 2165 server.go:954] "Client rotation is on, will bootstrap in background" May 15 09:38:46.734933 kubelet[2165]: E0515 09:38:46.734876 2165 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:46.735093 kubelet[2165]: I0515 09:38:46.734977 2165 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:38:46.742173 kubelet[2165]: E0515 09:38:46.742093 2165 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 09:38:46.742173 kubelet[2165]: I0515 09:38:46.742163 2165 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 09:38:46.744800 kubelet[2165]: I0515 09:38:46.744780 2165 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 09:38:46.745443 kubelet[2165]: I0515 09:38:46.745391 2165 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 09:38:46.745612 kubelet[2165]: I0515 09:38:46.745438 2165 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 09:38:46.745697 kubelet[2165]: I0515 09:38:46.745686 2165 topology_manager.go:138] "Creating topology manager with none policy" 
May 15 09:38:46.745697 kubelet[2165]: I0515 09:38:46.745694 2165 container_manager_linux.go:304] "Creating device plugin manager" May 15 09:38:46.745904 kubelet[2165]: I0515 09:38:46.745887 2165 state_mem.go:36] "Initialized new in-memory state store" May 15 09:38:46.748310 kubelet[2165]: I0515 09:38:46.748271 2165 kubelet.go:446] "Attempting to sync node with API server" May 15 09:38:46.748310 kubelet[2165]: I0515 09:38:46.748306 2165 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 09:38:46.748369 kubelet[2165]: I0515 09:38:46.748330 2165 kubelet.go:352] "Adding apiserver pod source" May 15 09:38:46.748369 kubelet[2165]: I0515 09:38:46.748340 2165 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 09:38:46.752310 kubelet[2165]: W0515 09:38:46.752256 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:46.752354 kubelet[2165]: E0515 09:38:46.752319 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:46.752434 kubelet[2165]: W0515 09:38:46.752400 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:46.752469 kubelet[2165]: E0515 09:38:46.752435 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list 
*v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:46.753265 kubelet[2165]: I0515 09:38:46.753239 2165 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 09:38:46.753864 kubelet[2165]: I0515 09:38:46.753844 2165 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 09:38:46.753972 kubelet[2165]: W0515 09:38:46.753958 2165 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.754791 2165 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.754829 2165 server.go:1287] "Started kubelet" May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.755259 2165 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.755555 2165 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.755619 2165 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.756005 2165 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 09:38:46.757079 kubelet[2165]: I0515 09:38:46.756505 2165 server.go:490] "Adding debug handlers to kubelet server" May 15 09:38:46.757807 kubelet[2165]: I0515 09:38:46.757782 2165 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 09:38:46.758350 kubelet[2165]: E0515 09:38:46.758315 2165 kubelet_node_status.go:467] 
"Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:38:46.758408 kubelet[2165]: I0515 09:38:46.758359 2165 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 09:38:46.758528 kubelet[2165]: I0515 09:38:46.758508 2165 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 09:38:46.758585 kubelet[2165]: I0515 09:38:46.758571 2165 reconciler.go:26] "Reconciler: start to sync state" May 15 09:38:46.758911 kubelet[2165]: W0515 09:38:46.758873 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:46.758948 kubelet[2165]: E0515 09:38:46.758917 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:46.759413 kubelet[2165]: I0515 09:38:46.759378 2165 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 09:38:46.759796 kubelet[2165]: E0515 09:38:46.759768 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms" May 15 09:38:46.760344 kubelet[2165]: I0515 09:38:46.760320 2165 factory.go:221] Registration of the containerd container factory successfully May 15 09:38:46.760344 kubelet[2165]: I0515 09:38:46.760339 2165 factory.go:221] Registration of the systemd container factory 
successfully May 15 09:38:46.761097 kubelet[2165]: E0515 09:38:46.760818 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fa9d843936b82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 09:38:46.754806658 +0000 UTC m=+0.793815525,LastTimestamp:2025-05-15 09:38:46.754806658 +0000 UTC m=+0.793815525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 09:38:46.762961 kubelet[2165]: E0515 09:38:46.762940 2165 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 09:38:46.771588 kubelet[2165]: I0515 09:38:46.771530 2165 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 09:38:46.771588 kubelet[2165]: I0515 09:38:46.771551 2165 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 09:38:46.771588 kubelet[2165]: I0515 09:38:46.771570 2165 state_mem.go:36] "Initialized new in-memory state store" May 15 09:38:46.774194 kubelet[2165]: I0515 09:38:46.774092 2165 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 09:38:46.775224 kubelet[2165]: I0515 09:38:46.775193 2165 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 09:38:46.775224 kubelet[2165]: I0515 09:38:46.775218 2165 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 09:38:46.775328 kubelet[2165]: I0515 09:38:46.775239 2165 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 09:38:46.775328 kubelet[2165]: I0515 09:38:46.775255 2165 kubelet.go:2388] "Starting kubelet main sync loop" May 15 09:38:46.775328 kubelet[2165]: E0515 09:38:46.775297 2165 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 09:38:46.859148 kubelet[2165]: E0515 09:38:46.859093 2165 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:38:46.876447 kubelet[2165]: E0515 09:38:46.876390 2165 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 09:38:46.886972 kubelet[2165]: I0515 09:38:46.886867 2165 policy_none.go:49] "None policy: Start" May 15 09:38:46.886972 kubelet[2165]: I0515 09:38:46.886892 2165 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 09:38:46.886972 kubelet[2165]: I0515 09:38:46.886905 2165 state_mem.go:35] "Initializing new in-memory state store" May 15 09:38:46.887177 kubelet[2165]: W0515 09:38:46.887106 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:46.887217 kubelet[2165]: E0515 09:38:46.887173 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:46.891944 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 15 09:38:46.906064 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 09:38:46.909105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 09:38:46.920145 kubelet[2165]: I0515 09:38:46.919916 2165 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:38:46.920275 kubelet[2165]: I0515 09:38:46.920168 2165 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 09:38:46.920275 kubelet[2165]: I0515 09:38:46.920181 2165 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:38:46.920616 kubelet[2165]: I0515 09:38:46.920433 2165 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:38:46.921624 kubelet[2165]: E0515 09:38:46.921589 2165 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 15 09:38:46.921677 kubelet[2165]: E0515 09:38:46.921636 2165 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 09:38:46.960372 kubelet[2165]: E0515 09:38:46.960262 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms" May 15 09:38:47.021989 kubelet[2165]: I0515 09:38:47.021904 2165 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 09:38:47.022514 kubelet[2165]: E0515 09:38:47.022382 2165 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" May 15 09:38:47.083847 systemd[1]: Created slice kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice - libcontainer container kubepods-burstable-pod2980a8ab51edc665be10a02e33130e15.slice. May 15 09:38:47.107264 kubelet[2165]: E0515 09:38:47.107231 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:47.109460 systemd[1]: Created slice kubepods-burstable-pod3334a9eec4a689a59eb77065a6fa070f.slice - libcontainer container kubepods-burstable-pod3334a9eec4a689a59eb77065a6fa070f.slice. May 15 09:38:47.121082 kubelet[2165]: E0515 09:38:47.120951 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:47.123197 systemd[1]: Created slice kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice - libcontainer container kubepods-burstable-pod5386fe11ed933ab82453de11903c7f47.slice. 
May 15 09:38:47.124544 kubelet[2165]: E0515 09:38:47.124514 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:47.161946 kubelet[2165]: I0515 09:38:47.161886 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:47.161946 kubelet[2165]: I0515 09:38:47.161924 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:47.162094 kubelet[2165]: I0515 09:38:47.161969 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:47.162094 kubelet[2165]: I0515 09:38:47.162026 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:47.162311 kubelet[2165]: I0515 09:38:47.162064 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:47.162356 kubelet[2165]: I0515 09:38:47.162324 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 09:38:47.162356 kubelet[2165]: I0515 09:38:47.162341 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:47.162356 kubelet[2165]: I0515 09:38:47.162355 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:47.162419 kubelet[2165]: I0515 09:38:47.162370 2165 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:47.183411 kubelet[2165]: E0515 09:38:47.183307 2165 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fa9d843936b82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 09:38:46.754806658 +0000 UTC m=+0.793815525,LastTimestamp:2025-05-15 09:38:46.754806658 +0000 UTC m=+0.793815525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 09:38:47.224627 kubelet[2165]: I0515 09:38:47.224498 2165 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 09:38:47.224870 kubelet[2165]: E0515 09:38:47.224825 2165 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" May 15 09:38:47.361416 kubelet[2165]: E0515 09:38:47.361363 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms" May 15 09:38:47.407884 kubelet[2165]: E0515 09:38:47.407856 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:47.408819 containerd[1447]: time="2025-05-15T09:38:47.408774205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,}" May 15 09:38:47.421995 kubelet[2165]: E0515 09:38:47.421729 2165 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:47.422561 containerd[1447]: time="2025-05-15T09:38:47.422298418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3334a9eec4a689a59eb77065a6fa070f,Namespace:kube-system,Attempt:0,}" May 15 09:38:47.425881 kubelet[2165]: E0515 09:38:47.425855 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:47.426313 containerd[1447]: time="2025-05-15T09:38:47.426281654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,}" May 15 09:38:47.626789 kubelet[2165]: I0515 09:38:47.626759 2165 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 09:38:47.627299 kubelet[2165]: E0515 09:38:47.627252 2165 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" May 15 09:38:47.834835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4038595017.mount: Deactivated successfully. 
May 15 09:38:47.839777 containerd[1447]: time="2025-05-15T09:38:47.839735571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:38:47.840834 containerd[1447]: time="2025-05-15T09:38:47.840765386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 09:38:47.847825 containerd[1447]: time="2025-05-15T09:38:47.846758053Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:38:47.849385 kubelet[2165]: W0515 09:38:47.849351 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:47.849449 kubelet[2165]: E0515 09:38:47.849394 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:47.850450 containerd[1447]: time="2025-05-15T09:38:47.850344751Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:38:47.851592 containerd[1447]: time="2025-05-15T09:38:47.851548096Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:38:47.852566 containerd[1447]: time="2025-05-15T09:38:47.852227768Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:38:47.853161 containerd[1447]: time="2025-05-15T09:38:47.853128685Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:38:47.853862 containerd[1447]: time="2025-05-15T09:38:47.853836498Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 444.985755ms" May 15 09:38:47.856061 containerd[1447]: time="2025-05-15T09:38:47.856013615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:38:47.858389 containerd[1447]: time="2025-05-15T09:38:47.858352775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 432.00327ms" May 15 09:38:47.859737 containerd[1447]: time="2025-05-15T09:38:47.859699108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 437.320709ms" May 15 09:38:47.881551 kubelet[2165]: W0515 09:38:47.880597 2165 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:47.883414 kubelet[2165]: E0515 09:38:47.883370 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:47.893443 kubelet[2165]: W0515 09:38:47.893381 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:47.893510 kubelet[2165]: E0515 09:38:47.893448 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:47.975628 kubelet[2165]: W0515 09:38:47.975554 2165 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.103:6443: connect: connection refused May 15 09:38:47.975628 kubelet[2165]: E0515 09:38:47.975628 2165 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.0.0.103:6443: connect: connection refused" logger="UnhandledError" May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062163072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062446659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062562415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062588753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062637304Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:48.062797 containerd[1447]: time="2025-05-15T09:38:48.062665243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.063011 containerd[1447]: time="2025-05-15T09:38:48.062743574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.064477 containerd[1447]: time="2025-05-15T09:38:48.064343307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:48.064477 containerd[1447]: time="2025-05-15T09:38:48.064428684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:48.064477 containerd[1447]: time="2025-05-15T09:38:48.064443613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.065977 containerd[1447]: time="2025-05-15T09:38:48.064945704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.065977 containerd[1447]: time="2025-05-15T09:38:48.064571338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:48.107274 systemd[1]: Started cri-containerd-2141e1ad7a1fc9d2066bed810d85b21fb5aefbbb83e9e6d3912050f41e6c7d0e.scope - libcontainer container 2141e1ad7a1fc9d2066bed810d85b21fb5aefbbb83e9e6d3912050f41e6c7d0e. May 15 09:38:48.108802 systemd[1]: Started cri-containerd-b1ba5ddf0c145ee9f20d3ac2b07b74897f5c32216547f80d22eefd1ff2296f2d.scope - libcontainer container b1ba5ddf0c145ee9f20d3ac2b07b74897f5c32216547f80d22eefd1ff2296f2d. May 15 09:38:48.112188 systemd[1]: Started cri-containerd-0e3204efa57200d35429507ff96fcccceaaefb8b165f6808c9cd9e6471f801b3.scope - libcontainer container 0e3204efa57200d35429507ff96fcccceaaefb8b165f6808c9cd9e6471f801b3. 
May 15 09:38:48.140474 containerd[1447]: time="2025-05-15T09:38:48.140370511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3334a9eec4a689a59eb77065a6fa070f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2141e1ad7a1fc9d2066bed810d85b21fb5aefbbb83e9e6d3912050f41e6c7d0e\"" May 15 09:38:48.142162 kubelet[2165]: E0515 09:38:48.142139 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:48.144094 containerd[1447]: time="2025-05-15T09:38:48.144039686Z" level=info msg="CreateContainer within sandbox \"2141e1ad7a1fc9d2066bed810d85b21fb5aefbbb83e9e6d3912050f41e6c7d0e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 09:38:48.145462 containerd[1447]: time="2025-05-15T09:38:48.145423877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:5386fe11ed933ab82453de11903c7f47,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e3204efa57200d35429507ff96fcccceaaefb8b165f6808c9cd9e6471f801b3\"" May 15 09:38:48.146259 containerd[1447]: time="2025-05-15T09:38:48.146228006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2980a8ab51edc665be10a02e33130e15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1ba5ddf0c145ee9f20d3ac2b07b74897f5c32216547f80d22eefd1ff2296f2d\"" May 15 09:38:48.146808 kubelet[2165]: E0515 09:38:48.146731 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:48.147109 kubelet[2165]: E0515 09:38:48.147033 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:48.148616 containerd[1447]: 
time="2025-05-15T09:38:48.148584478Z" level=info msg="CreateContainer within sandbox \"0e3204efa57200d35429507ff96fcccceaaefb8b165f6808c9cd9e6471f801b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 09:38:48.148695 containerd[1447]: time="2025-05-15T09:38:48.148600928Z" level=info msg="CreateContainer within sandbox \"b1ba5ddf0c145ee9f20d3ac2b07b74897f5c32216547f80d22eefd1ff2296f2d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 09:38:48.159363 containerd[1447]: time="2025-05-15T09:38:48.159317622Z" level=info msg="CreateContainer within sandbox \"2141e1ad7a1fc9d2066bed810d85b21fb5aefbbb83e9e6d3912050f41e6c7d0e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c1efc1e21aeab5ef1194604cd9e6dffd7dd80ed51a1a5e84d5c80f3905f4839d\"" May 15 09:38:48.160015 containerd[1447]: time="2025-05-15T09:38:48.159985822Z" level=info msg="StartContainer for \"c1efc1e21aeab5ef1194604cd9e6dffd7dd80ed51a1a5e84d5c80f3905f4839d\"" May 15 09:38:48.162578 kubelet[2165]: E0515 09:38:48.162543 2165 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="1.6s" May 15 09:38:48.166255 containerd[1447]: time="2025-05-15T09:38:48.166217364Z" level=info msg="CreateContainer within sandbox \"0e3204efa57200d35429507ff96fcccceaaefb8b165f6808c9cd9e6471f801b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3d194a8a26a12322ad689b01c4be9b5c8e489a9e92fd068024cb658f8ccdc3dd\"" May 15 09:38:48.166947 containerd[1447]: time="2025-05-15T09:38:48.166842375Z" level=info msg="StartContainer for \"3d194a8a26a12322ad689b01c4be9b5c8e489a9e92fd068024cb658f8ccdc3dd\"" May 15 09:38:48.170984 containerd[1447]: time="2025-05-15T09:38:48.170932588Z" level=info msg="CreateContainer within sandbox 
\"b1ba5ddf0c145ee9f20d3ac2b07b74897f5c32216547f80d22eefd1ff2296f2d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e8d6e1a9027eea0abb2fbe0f07991232d04463df3abe5f61b937449fed7d585\"" May 15 09:38:48.171441 containerd[1447]: time="2025-05-15T09:38:48.171410342Z" level=info msg="StartContainer for \"9e8d6e1a9027eea0abb2fbe0f07991232d04463df3abe5f61b937449fed7d585\"" May 15 09:38:48.189319 systemd[1]: Started cri-containerd-c1efc1e21aeab5ef1194604cd9e6dffd7dd80ed51a1a5e84d5c80f3905f4839d.scope - libcontainer container c1efc1e21aeab5ef1194604cd9e6dffd7dd80ed51a1a5e84d5c80f3905f4839d. May 15 09:38:48.193092 systemd[1]: Started cri-containerd-3d194a8a26a12322ad689b01c4be9b5c8e489a9e92fd068024cb658f8ccdc3dd.scope - libcontainer container 3d194a8a26a12322ad689b01c4be9b5c8e489a9e92fd068024cb658f8ccdc3dd. May 15 09:38:48.196544 systemd[1]: Started cri-containerd-9e8d6e1a9027eea0abb2fbe0f07991232d04463df3abe5f61b937449fed7d585.scope - libcontainer container 9e8d6e1a9027eea0abb2fbe0f07991232d04463df3abe5f61b937449fed7d585. 
May 15 09:38:48.223382 containerd[1447]: time="2025-05-15T09:38:48.223258750Z" level=info msg="StartContainer for \"c1efc1e21aeab5ef1194604cd9e6dffd7dd80ed51a1a5e84d5c80f3905f4839d\" returns successfully" May 15 09:38:48.260774 containerd[1447]: time="2025-05-15T09:38:48.255523988Z" level=info msg="StartContainer for \"3d194a8a26a12322ad689b01c4be9b5c8e489a9e92fd068024cb658f8ccdc3dd\" returns successfully" May 15 09:38:48.260774 containerd[1447]: time="2025-05-15T09:38:48.255626176Z" level=info msg="StartContainer for \"9e8d6e1a9027eea0abb2fbe0f07991232d04463df3abe5f61b937449fed7d585\" returns successfully" May 15 09:38:48.428638 kubelet[2165]: I0515 09:38:48.428526 2165 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 09:38:48.785316 kubelet[2165]: E0515 09:38:48.785226 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:48.785417 kubelet[2165]: E0515 09:38:48.785344 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:48.787963 kubelet[2165]: E0515 09:38:48.787942 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:48.788076 kubelet[2165]: E0515 09:38:48.788060 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:48.790160 kubelet[2165]: E0515 09:38:48.790142 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:48.790272 kubelet[2165]: E0515 09:38:48.790257 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:49.754348 kubelet[2165]: I0515 09:38:49.754293 2165 apiserver.go:52] "Watching apiserver" May 15 09:38:49.781838 kubelet[2165]: E0515 09:38:49.781776 2165 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 09:38:49.790669 kubelet[2165]: E0515 09:38:49.790639 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:49.790786 kubelet[2165]: E0515 09:38:49.790764 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:49.790786 kubelet[2165]: E0515 09:38:49.790767 2165 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 15 09:38:49.790894 kubelet[2165]: E0515 09:38:49.790869 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:49.813154 kubelet[2165]: I0515 09:38:49.813107 2165 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 09:38:49.859800 kubelet[2165]: I0515 09:38:49.859514 2165 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 09:38:49.859800 kubelet[2165]: I0515 09:38:49.859613 2165 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 09:38:49.867289 kubelet[2165]: E0515 09:38:49.867254 2165 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 15 09:38:49.867289 kubelet[2165]: I0515 09:38:49.867285 2165 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 09:38:49.869165 kubelet[2165]: E0515 09:38:49.869140 2165 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 09:38:49.869428 kubelet[2165]: I0515 09:38:49.869243 2165 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 09:38:49.870885 kubelet[2165]: E0515 09:38:49.870861 2165 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 15 09:38:50.791001 kubelet[2165]: I0515 09:38:50.790864 2165 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 09:38:50.791491 kubelet[2165]: I0515 09:38:50.791012 2165 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 09:38:50.800346 kubelet[2165]: E0515 09:38:50.800289 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:50.800591 kubelet[2165]: E0515 09:38:50.800554 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:51.792865 kubelet[2165]: E0515 09:38:51.792825 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 15 09:38:51.793370 kubelet[2165]: E0515 09:38:51.793141 2165 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:51.917367 systemd[1]: Reloading requested from client PID 2441 ('systemctl') (unit session-7.scope)... May 15 09:38:51.917383 systemd[1]: Reloading... May 15 09:38:51.985085 zram_generator::config[2480]: No configuration found. May 15 09:38:52.069918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:38:52.136960 systemd[1]: Reloading finished in 219 ms. May 15 09:38:52.173025 kubelet[2165]: I0515 09:38:52.172957 2165 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:38:52.173158 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:52.198373 systemd[1]: kubelet.service: Deactivated successfully. May 15 09:38:52.198687 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:52.198790 systemd[1]: kubelet.service: Consumed 1.165s CPU time, 124.5M memory peak, 0B memory swap peak. May 15 09:38:52.209311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:38:52.309736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:38:52.315007 (kubelet)[2522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 09:38:52.359458 kubelet[2522]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 09:38:52.359458 kubelet[2522]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 09:38:52.359458 kubelet[2522]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:38:52.360678 kubelet[2522]: I0515 09:38:52.360417 2522 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 09:38:52.367321 kubelet[2522]: I0515 09:38:52.367280 2522 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 09:38:52.367862 kubelet[2522]: I0515 09:38:52.367463 2522 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 09:38:52.368076 kubelet[2522]: I0515 09:38:52.368037 2522 server.go:954] "Client rotation is on, will bootstrap in background" May 15 09:38:52.371001 kubelet[2522]: I0515 09:38:52.370956 2522 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 09:38:52.376716 kubelet[2522]: I0515 09:38:52.376665 2522 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:38:52.381987 kubelet[2522]: E0515 09:38:52.381070 2522 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 09:38:52.381987 kubelet[2522]: I0515 09:38:52.381114 2522 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 15 09:38:52.384223 kubelet[2522]: I0515 09:38:52.384195 2522 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 09:38:52.384845 kubelet[2522]: I0515 09:38:52.384533 2522 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 09:38:52.384845 kubelet[2522]: I0515 09:38:52.384576 2522 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVer
sion":2} May 15 09:38:52.384845 kubelet[2522]: I0515 09:38:52.384753 2522 topology_manager.go:138] "Creating topology manager with none policy" May 15 09:38:52.384845 kubelet[2522]: I0515 09:38:52.384762 2522 container_manager_linux.go:304] "Creating device plugin manager" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.384828 2522 state_mem.go:36] "Initialized new in-memory state store" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.384970 2522 kubelet.go:446] "Attempting to sync node with API server" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.384982 2522 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.385000 2522 kubelet.go:352] "Adding apiserver pod source" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.385009 2522 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 09:38:52.385945 kubelet[2522]: I0515 09:38:52.385932 2522 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 09:38:52.386459 kubelet[2522]: I0515 09:38:52.386428 2522 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 09:38:52.386957 kubelet[2522]: I0515 09:38:52.386927 2522 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 09:38:52.387008 kubelet[2522]: I0515 09:38:52.386965 2522 server.go:1287] "Started kubelet" May 15 09:38:52.387348 kubelet[2522]: I0515 09:38:52.387311 2522 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 09:38:52.388273 kubelet[2522]: I0515 09:38:52.388251 2522 server.go:490] "Adding debug handlers to kubelet server" May 15 09:38:52.388377 kubelet[2522]: I0515 09:38:52.388353 2522 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 09:38:52.389668 kubelet[2522]: I0515 09:38:52.389604 2522 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 May 15 09:38:52.389832 kubelet[2522]: I0515 09:38:52.389810 2522 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 09:38:52.389944 kubelet[2522]: I0515 09:38:52.389915 2522 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 09:38:52.390454 kubelet[2522]: E0515 09:38:52.390428 2522 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 09:38:52.390510 kubelet[2522]: I0515 09:38:52.390471 2522 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 09:38:52.390691 kubelet[2522]: I0515 09:38:52.390662 2522 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 09:38:52.390798 kubelet[2522]: I0515 09:38:52.390783 2522 reconciler.go:26] "Reconciler: start to sync state" May 15 09:38:52.391812 kubelet[2522]: I0515 09:38:52.391768 2522 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 09:38:52.393291 kubelet[2522]: I0515 09:38:52.393260 2522 factory.go:221] Registration of the containerd container factory successfully May 15 09:38:52.393291 kubelet[2522]: I0515 09:38:52.393278 2522 factory.go:221] Registration of the systemd container factory successfully May 15 09:38:52.396056 kubelet[2522]: E0515 09:38:52.393398 2522 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 09:38:52.428983 kubelet[2522]: I0515 09:38:52.428913 2522 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 09:38:52.430433 kubelet[2522]: I0515 09:38:52.430400 2522 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 09:38:52.430433 kubelet[2522]: I0515 09:38:52.430430 2522 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 09:38:52.430547 kubelet[2522]: I0515 09:38:52.430451 2522 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 09:38:52.430547 kubelet[2522]: I0515 09:38:52.430460 2522 kubelet.go:2388] "Starting kubelet main sync loop" May 15 09:38:52.430547 kubelet[2522]: E0515 09:38:52.430519 2522 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 09:38:52.441250 kubelet[2522]: I0515 09:38:52.441223 2522 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 09:38:52.441250 kubelet[2522]: I0515 09:38:52.441243 2522 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 09:38:52.441387 kubelet[2522]: I0515 09:38:52.441263 2522 state_mem.go:36] "Initialized new in-memory state store" May 15 09:38:52.441466 kubelet[2522]: I0515 09:38:52.441448 2522 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 09:38:52.441496 kubelet[2522]: I0515 09:38:52.441465 2522 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 09:38:52.441496 kubelet[2522]: I0515 09:38:52.441485 2522 policy_none.go:49] "None policy: Start" May 15 09:38:52.441496 kubelet[2522]: I0515 09:38:52.441493 2522 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 09:38:52.441574 kubelet[2522]: I0515 09:38:52.441502 2522 state_mem.go:35] "Initializing new in-memory state store" May 15 09:38:52.441629 kubelet[2522]: I0515 09:38:52.441615 2522 state_mem.go:75] "Updated machine memory state" May 15 09:38:52.445326 kubelet[2522]: I0515 09:38:52.445290 2522 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:38:52.445660 kubelet[2522]: I0515 
09:38:52.445487 2522 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 09:38:52.445660 kubelet[2522]: I0515 09:38:52.445505 2522 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:38:52.446005 kubelet[2522]: I0515 09:38:52.445976 2522 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:38:52.447179 kubelet[2522]: E0515 09:38:52.447025 2522 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 09:38:52.531269 kubelet[2522]: I0515 09:38:52.531220 2522 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.531269 kubelet[2522]: I0515 09:38:52.531263 2522 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 15 09:38:52.531504 kubelet[2522]: I0515 09:38:52.531488 2522 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 15 09:38:52.536322 kubelet[2522]: E0515 09:38:52.536283 2522 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 09:38:52.536452 kubelet[2522]: E0515 09:38:52.536383 2522 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 09:38:52.550062 kubelet[2522]: I0515 09:38:52.550021 2522 kubelet_node_status.go:76] "Attempting to register node" node="localhost" May 15 09:38:52.555986 kubelet[2522]: I0515 09:38:52.555953 2522 kubelet_node_status.go:125] "Node was previously registered" node="localhost" May 15 09:38:52.556130 kubelet[2522]: I0515 09:38:52.556041 2522 kubelet_node_status.go:79] "Successfully registered node" node="localhost" May 15 09:38:52.592405 kubelet[2522]: I0515 
09:38:52.592283 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:52.592405 kubelet[2522]: I0515 09:38:52.592331 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:52.592405 kubelet[2522]: I0515 09:38:52.592352 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.592405 kubelet[2522]: I0515 09:38:52.592371 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.592405 kubelet[2522]: I0515 09:38:52.592388 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.592646 kubelet[2522]: I0515 
09:38:52.592403 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2980a8ab51edc665be10a02e33130e15-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2980a8ab51edc665be10a02e33130e15\") " pod="kube-system/kube-scheduler-localhost" May 15 09:38:52.592646 kubelet[2522]: I0515 09:38:52.592417 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3334a9eec4a689a59eb77065a6fa070f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3334a9eec4a689a59eb77065a6fa070f\") " pod="kube-system/kube-apiserver-localhost" May 15 09:38:52.592646 kubelet[2522]: I0515 09:38:52.592431 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.592646 kubelet[2522]: I0515 09:38:52.592447 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5386fe11ed933ab82453de11903c7f47-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"5386fe11ed933ab82453de11903c7f47\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:38:52.837589 kubelet[2522]: E0515 09:38:52.837537 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:52.841087 kubelet[2522]: E0515 09:38:52.840970 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:52.841087 kubelet[2522]: E0515 09:38:52.841011 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:52.934302 sudo[2561]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 09:38:52.934588 sudo[2561]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 09:38:53.368833 sudo[2561]: pam_unix(sudo:session): session closed for user root May 15 09:38:53.385592 kubelet[2522]: I0515 09:38:53.385331 2522 apiserver.go:52] "Watching apiserver" May 15 09:38:53.391233 kubelet[2522]: I0515 09:38:53.391202 2522 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 09:38:53.441005 kubelet[2522]: I0515 09:38:53.440621 2522 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 15 09:38:53.442223 kubelet[2522]: E0515 09:38:53.441576 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:53.444115 kubelet[2522]: E0515 09:38:53.442304 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:53.445066 kubelet[2522]: E0515 09:38:53.445012 2522 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 15 09:38:53.445190 kubelet[2522]: E0515 09:38:53.445164 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:53.469254 
kubelet[2522]: I0515 09:38:53.469184 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.468940876 podStartE2EDuration="1.468940876s" podCreationTimestamp="2025-05-15 09:38:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:38:53.468824666 +0000 UTC m=+1.150109898" watchObservedRunningTime="2025-05-15 09:38:53.468940876 +0000 UTC m=+1.150226108" May 15 09:38:53.487439 kubelet[2522]: I0515 09:38:53.487375 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.487347589 podStartE2EDuration="3.487347589s" podCreationTimestamp="2025-05-15 09:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:38:53.480832228 +0000 UTC m=+1.162117460" watchObservedRunningTime="2025-05-15 09:38:53.487347589 +0000 UTC m=+1.168632821" May 15 09:38:54.442481 kubelet[2522]: E0515 09:38:54.442367 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:54.442481 kubelet[2522]: E0515 09:38:54.442450 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:54.443114 kubelet[2522]: E0515 09:38:54.442638 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:55.023030 sudo[1623]: pam_unix(sudo:session): session closed for user root May 15 09:38:55.024193 sshd[1622]: Connection closed by 10.0.0.1 port 53362 May 15 
09:38:55.024535 sshd-session[1620]: pam_unix(sshd:session): session closed for user core May 15 09:38:55.027680 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:53362.service: Deactivated successfully. May 15 09:38:55.029296 systemd[1]: session-7.scope: Deactivated successfully. May 15 09:38:55.029502 systemd[1]: session-7.scope: Consumed 7.096s CPU time, 156.4M memory peak, 0B memory swap peak. May 15 09:38:55.030561 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 15 09:38:55.031652 systemd-logind[1424]: Removed session 7. May 15 09:38:55.443753 kubelet[2522]: E0515 09:38:55.443727 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:57.676829 kubelet[2522]: I0515 09:38:57.676778 2522 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 09:38:57.677534 containerd[1447]: time="2025-05-15T09:38:57.677502333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 15 09:38:57.678365 kubelet[2522]: I0515 09:38:57.677684 2522 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 09:38:58.383123 kubelet[2522]: I0515 09:38:58.383043 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.383025466 podStartE2EDuration="8.383025466s" podCreationTimestamp="2025-05-15 09:38:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:38:53.488145172 +0000 UTC m=+1.169430484" watchObservedRunningTime="2025-05-15 09:38:58.383025466 +0000 UTC m=+6.064310698" May 15 09:38:58.390380 systemd[1]: Created slice kubepods-besteffort-pod2737820b_f535_4018_b6fd_2ec5b347cb36.slice - libcontainer container kubepods-besteffort-pod2737820b_f535_4018_b6fd_2ec5b347cb36.slice. May 15 09:38:58.415922 systemd[1]: Created slice kubepods-burstable-pod25969c07_3261_434d_8148_b4b33fcf9687.slice - libcontainer container kubepods-burstable-pod25969c07_3261_434d_8148_b4b33fcf9687.slice. 
May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435410 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-etc-cni-netd\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435466 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-kernel\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435510 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2737820b-f535-4018-b6fd-2ec5b347cb36-kube-proxy\") pod \"kube-proxy-bcr8f\" (UID: \"2737820b-f535-4018-b6fd-2ec5b347cb36\") " pod="kube-system/kube-proxy-bcr8f" May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435526 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2737820b-f535-4018-b6fd-2ec5b347cb36-xtables-lock\") pod \"kube-proxy-bcr8f\" (UID: \"2737820b-f535-4018-b6fd-2ec5b347cb36\") " pod="kube-system/kube-proxy-bcr8f" May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435541 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-bpf-maps\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436323 kubelet[2522]: I0515 09:38:58.435557 2522 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cni-path\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435580 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25969c07-3261-434d-8148-b4b33fcf9687-clustermesh-secrets\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435598 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-hostproc\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435616 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-hubble-tls\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435637 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpz9k\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-kube-api-access-bpz9k\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435653 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-lib-modules\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436558 kubelet[2522]: I0515 09:38:58.435676 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25969c07-3261-434d-8148-b4b33fcf9687-cilium-config-path\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436674 kubelet[2522]: I0515 09:38:58.435692 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2737820b-f535-4018-b6fd-2ec5b347cb36-lib-modules\") pod \"kube-proxy-bcr8f\" (UID: \"2737820b-f535-4018-b6fd-2ec5b347cb36\") " pod="kube-system/kube-proxy-bcr8f" May 15 09:38:58.436674 kubelet[2522]: I0515 09:38:58.435708 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-run\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436674 kubelet[2522]: I0515 09:38:58.435725 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-cgroup\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436674 kubelet[2522]: I0515 09:38:58.435740 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fg4s\" (UniqueName: \"kubernetes.io/projected/2737820b-f535-4018-b6fd-2ec5b347cb36-kube-api-access-4fg4s\") pod \"kube-proxy-bcr8f\" (UID: 
\"2737820b-f535-4018-b6fd-2ec5b347cb36\") " pod="kube-system/kube-proxy-bcr8f" May 15 09:38:58.436674 kubelet[2522]: I0515 09:38:58.435755 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-xtables-lock\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.436860 kubelet[2522]: I0515 09:38:58.435771 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-net\") pod \"cilium-tv2rz\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " pod="kube-system/cilium-tv2rz" May 15 09:38:58.671804 kubelet[2522]: E0515 09:38:58.671441 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:58.709080 kubelet[2522]: E0515 09:38:58.709043 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:58.709658 containerd[1447]: time="2025-05-15T09:38:58.709586726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcr8f,Uid:2737820b-f535-4018-b6fd-2ec5b347cb36,Namespace:kube-system,Attempt:0,}" May 15 09:38:58.720967 kubelet[2522]: E0515 09:38:58.720900 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:58.721387 containerd[1447]: time="2025-05-15T09:38:58.721315772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tv2rz,Uid:25969c07-3261-434d-8148-b4b33fcf9687,Namespace:kube-system,Attempt:0,}" 
May 15 09:38:58.731206 containerd[1447]: time="2025-05-15T09:38:58.731058814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:58.731206 containerd[1447]: time="2025-05-15T09:38:58.731118033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:58.731206 containerd[1447]: time="2025-05-15T09:38:58.731128877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:58.731354 containerd[1447]: time="2025-05-15T09:38:58.731205502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:58.750231 systemd[1]: Started cri-containerd-37b5f5ebbd58cc9f42daede7c97bd2f40c95387a268daf242299bb6d96a34576.scope - libcontainer container 37b5f5ebbd58cc9f42daede7c97bd2f40c95387a268daf242299bb6d96a34576. 
May 15 09:38:58.771023 containerd[1447]: time="2025-05-15T09:38:58.770984611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bcr8f,Uid:2737820b-f535-4018-b6fd-2ec5b347cb36,Namespace:kube-system,Attempt:0,} returns sandbox id \"37b5f5ebbd58cc9f42daede7c97bd2f40c95387a268daf242299bb6d96a34576\"" May 15 09:38:58.771964 kubelet[2522]: E0515 09:38:58.771848 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:58.776218 containerd[1447]: time="2025-05-15T09:38:58.775226108Z" level=info msg="CreateContainer within sandbox \"37b5f5ebbd58cc9f42daede7c97bd2f40c95387a268daf242299bb6d96a34576\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 09:38:58.783951 containerd[1447]: time="2025-05-15T09:38:58.783691055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:58.783951 containerd[1447]: time="2025-05-15T09:38:58.783772201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:58.783951 containerd[1447]: time="2025-05-15T09:38:58.783789287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:58.784235 containerd[1447]: time="2025-05-15T09:38:58.784095706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:58.812421 containerd[1447]: time="2025-05-15T09:38:58.811940343Z" level=info msg="CreateContainer within sandbox \"37b5f5ebbd58cc9f42daede7c97bd2f40c95387a268daf242299bb6d96a34576\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f2c8fba4034d96dd2fc853ded2723cf8b25df831854aee6c421a3839c9c11495\"" May 15 09:38:58.813703 containerd[1447]: time="2025-05-15T09:38:58.813442030Z" level=info msg="StartContainer for \"f2c8fba4034d96dd2fc853ded2723cf8b25df831854aee6c421a3839c9c11495\"" May 15 09:38:58.816204 systemd[1]: Started cri-containerd-dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518.scope - libcontainer container dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518. May 15 09:38:58.816926 systemd[1]: Created slice kubepods-besteffort-pod5453131f_82d0_4da3_88ef_4f77543a406e.slice - libcontainer container kubepods-besteffort-pod5453131f_82d0_4da3_88ef_4f77543a406e.slice. 
May 15 09:38:58.841380 kubelet[2522]: I0515 09:38:58.841310 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6nb8\" (UniqueName: \"kubernetes.io/projected/5453131f-82d0-4da3-88ef-4f77543a406e-kube-api-access-k6nb8\") pod \"cilium-operator-6c4d7847fc-pcpdr\" (UID: \"5453131f-82d0-4da3-88ef-4f77543a406e\") " pod="kube-system/cilium-operator-6c4d7847fc-pcpdr" May 15 09:38:58.841380 kubelet[2522]: I0515 09:38:58.841350 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5453131f-82d0-4da3-88ef-4f77543a406e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pcpdr\" (UID: \"5453131f-82d0-4da3-88ef-4f77543a406e\") " pod="kube-system/cilium-operator-6c4d7847fc-pcpdr" May 15 09:38:58.841489 systemd[1]: Started cri-containerd-f2c8fba4034d96dd2fc853ded2723cf8b25df831854aee6c421a3839c9c11495.scope - libcontainer container f2c8fba4034d96dd2fc853ded2723cf8b25df831854aee6c421a3839c9c11495. 
May 15 09:38:58.842186 containerd[1447]: time="2025-05-15T09:38:58.842115296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tv2rz,Uid:25969c07-3261-434d-8148-b4b33fcf9687,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\"" May 15 09:38:58.842799 kubelet[2522]: E0515 09:38:58.842778 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:58.844944 containerd[1447]: time="2025-05-15T09:38:58.844773878Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 09:38:58.866394 containerd[1447]: time="2025-05-15T09:38:58.866357523Z" level=info msg="StartContainer for \"f2c8fba4034d96dd2fc853ded2723cf8b25df831854aee6c421a3839c9c11495\" returns successfully" May 15 09:38:59.119688 kubelet[2522]: E0515 09:38:59.119654 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:59.120096 containerd[1447]: time="2025-05-15T09:38:59.120062921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pcpdr,Uid:5453131f-82d0-4da3-88ef-4f77543a406e,Namespace:kube-system,Attempt:0,}" May 15 09:38:59.151011 containerd[1447]: time="2025-05-15T09:38:59.150889592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:38:59.151011 containerd[1447]: time="2025-05-15T09:38:59.150937167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:38:59.151011 containerd[1447]: time="2025-05-15T09:38:59.150947810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:59.151281 containerd[1447]: time="2025-05-15T09:38:59.151076210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:38:59.169300 systemd[1]: Started cri-containerd-09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4.scope - libcontainer container 09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4. May 15 09:38:59.196862 containerd[1447]: time="2025-05-15T09:38:59.196828186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pcpdr,Uid:5453131f-82d0-4da3-88ef-4f77543a406e,Namespace:kube-system,Attempt:0,} returns sandbox id \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\"" May 15 09:38:59.197601 kubelet[2522]: E0515 09:38:59.197579 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:59.452502 kubelet[2522]: E0515 09:38:59.452403 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:59.452681 kubelet[2522]: E0515 09:38:59.452662 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:38:59.461874 kubelet[2522]: I0515 09:38:59.461735 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bcr8f" podStartSLOduration=1.461721533 podStartE2EDuration="1.461721533s" 
podCreationTimestamp="2025-05-15 09:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:38:59.461505106 +0000 UTC m=+7.142790338" watchObservedRunningTime="2025-05-15 09:38:59.461721533 +0000 UTC m=+7.143006725" May 15 09:39:00.454767 kubelet[2522]: E0515 09:39:00.454462 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:02.036187 kubelet[2522]: E0515 09:39:02.035864 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:02.158716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340230200.mount: Deactivated successfully. May 15 09:39:02.463030 kubelet[2522]: E0515 09:39:02.462770 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:03.461538 kubelet[2522]: E0515 09:39:03.461342 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:04.934926 kubelet[2522]: E0515 09:39:04.934864 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:04.989198 containerd[1447]: time="2025-05-15T09:39:04.989143104Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:39:04.989833 containerd[1447]: time="2025-05-15T09:39:04.989785535Z" level=info 
msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 09:39:04.990609 containerd[1447]: time="2025-05-15T09:39:04.990577122Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:39:04.992237 containerd[1447]: time="2025-05-15T09:39:04.992205426Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.147393296s" May 15 09:39:04.992237 containerd[1447]: time="2025-05-15T09:39:04.992236913Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 09:39:04.994182 containerd[1447]: time="2025-05-15T09:39:04.994156125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 09:39:04.995746 containerd[1447]: time="2025-05-15T09:39:04.995709051Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 09:39:05.026469 containerd[1447]: time="2025-05-15T09:39:05.026419272Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\"" May 15 09:39:05.026997 containerd[1447]: time="2025-05-15T09:39:05.026928466Z" level=info msg="StartContainer for \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\"" May 15 09:39:05.058265 systemd[1]: Started cri-containerd-da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48.scope - libcontainer container da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48. May 15 09:39:05.084782 containerd[1447]: time="2025-05-15T09:39:05.084740733Z" level=info msg="StartContainer for \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\" returns successfully" May 15 09:39:05.124677 update_engine[1428]: I20250515 09:39:05.124084 1428 update_attempter.cc:509] Updating boot flags... May 15 09:39:05.192089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2966) May 15 09:39:05.197725 systemd[1]: cri-containerd-da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48.scope: Deactivated successfully. 
May 15 09:39:05.242021 containerd[1447]: time="2025-05-15T09:39:05.234999304Z" level=info msg="shim disconnected" id=da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48 namespace=k8s.io May 15 09:39:05.242021 containerd[1447]: time="2025-05-15T09:39:05.242011114Z" level=warning msg="cleaning up after shim disconnected" id=da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48 namespace=k8s.io May 15 09:39:05.242021 containerd[1447]: time="2025-05-15T09:39:05.242025077Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:39:05.270082 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2966) May 15 09:39:05.288147 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2966) May 15 09:39:05.473482 kubelet[2522]: E0515 09:39:05.473381 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:05.482248 containerd[1447]: time="2025-05-15T09:39:05.482214268Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 09:39:05.493290 containerd[1447]: time="2025-05-15T09:39:05.493244939Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\"" May 15 09:39:05.493758 containerd[1447]: time="2025-05-15T09:39:05.493689998Z" level=info msg="StartContainer for \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\"" May 15 09:39:05.519230 systemd[1]: Started cri-containerd-a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2.scope - libcontainer 
container a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2. May 15 09:39:05.540491 containerd[1447]: time="2025-05-15T09:39:05.540446430Z" level=info msg="StartContainer for \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\" returns successfully" May 15 09:39:05.566526 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 09:39:05.566729 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 09:39:05.566801 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 09:39:05.573953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 09:39:05.574152 systemd[1]: cri-containerd-a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2.scope: Deactivated successfully. May 15 09:39:05.586169 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 09:39:05.603636 containerd[1447]: time="2025-05-15T09:39:05.603577968Z" level=info msg="shim disconnected" id=a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2 namespace=k8s.io May 15 09:39:05.603636 containerd[1447]: time="2025-05-15T09:39:05.603633420Z" level=warning msg="cleaning up after shim disconnected" id=a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2 namespace=k8s.io May 15 09:39:05.603636 containerd[1447]: time="2025-05-15T09:39:05.603643663Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:39:06.020523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48-rootfs.mount: Deactivated successfully. May 15 09:39:06.413934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3632447132.mount: Deactivated successfully. 
May 15 09:39:06.478156 kubelet[2522]: E0515 09:39:06.478113 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:06.482685 containerd[1447]: time="2025-05-15T09:39:06.482528631Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 09:39:06.517034 containerd[1447]: time="2025-05-15T09:39:06.516812132Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\"" May 15 09:39:06.517624 containerd[1447]: time="2025-05-15T09:39:06.517591577Z" level=info msg="StartContainer for \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\"" May 15 09:39:06.547256 systemd[1]: Started cri-containerd-7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989.scope - libcontainer container 7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989. May 15 09:39:06.577158 containerd[1447]: time="2025-05-15T09:39:06.577103370Z" level=info msg="StartContainer for \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\" returns successfully" May 15 09:39:06.588194 systemd[1]: cri-containerd-7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989.scope: Deactivated successfully. 
May 15 09:39:06.652079 containerd[1447]: time="2025-05-15T09:39:06.651833243Z" level=info msg="shim disconnected" id=7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989 namespace=k8s.io May 15 09:39:06.652079 containerd[1447]: time="2025-05-15T09:39:06.651889816Z" level=warning msg="cleaning up after shim disconnected" id=7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989 namespace=k8s.io May 15 09:39:06.652079 containerd[1447]: time="2025-05-15T09:39:06.651898337Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:39:06.754671 containerd[1447]: time="2025-05-15T09:39:06.754556958Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:39:06.755248 containerd[1447]: time="2025-05-15T09:39:06.755159686Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 09:39:06.756060 containerd[1447]: time="2025-05-15T09:39:06.756026791Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:39:06.757336 containerd[1447]: time="2025-05-15T09:39:06.757301662Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.763113969s" May 15 09:39:06.757419 containerd[1447]: time="2025-05-15T09:39:06.757336510Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 09:39:06.762385 containerd[1447]: time="2025-05-15T09:39:06.761696238Z" level=info msg="CreateContainer within sandbox \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 09:39:06.772061 containerd[1447]: time="2025-05-15T09:39:06.772010034Z" level=info msg="CreateContainer within sandbox \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\"" May 15 09:39:06.775626 containerd[1447]: time="2025-05-15T09:39:06.775575954Z" level=info msg="StartContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\"" May 15 09:39:06.801209 systemd[1]: Started cri-containerd-f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e.scope - libcontainer container f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e. 
May 15 09:39:06.822743 containerd[1447]: time="2025-05-15T09:39:06.822687706Z" level=info msg="StartContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" returns successfully" May 15 09:39:07.482014 kubelet[2522]: E0515 09:39:07.481979 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:07.489002 kubelet[2522]: E0515 09:39:07.488895 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:07.495214 containerd[1447]: time="2025-05-15T09:39:07.495143922Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 09:39:07.526265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259083503.mount: Deactivated successfully. 
May 15 09:39:07.530979 containerd[1447]: time="2025-05-15T09:39:07.530928613Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\"" May 15 09:39:07.531628 containerd[1447]: time="2025-05-15T09:39:07.531558180Z" level=info msg="StartContainer for \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\"" May 15 09:39:07.542361 kubelet[2522]: I0515 09:39:07.541875 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pcpdr" podStartSLOduration=1.9804486510000001 podStartE2EDuration="9.541855787s" podCreationTimestamp="2025-05-15 09:38:58 +0000 UTC" firstStartedPulling="2025-05-15 09:38:59.198725169 +0000 UTC m=+6.880010401" lastFinishedPulling="2025-05-15 09:39:06.760132305 +0000 UTC m=+14.441417537" observedRunningTime="2025-05-15 09:39:07.511939405 +0000 UTC m=+15.193224637" watchObservedRunningTime="2025-05-15 09:39:07.541855787 +0000 UTC m=+15.223141019" May 15 09:39:07.561733 systemd[1]: Started cri-containerd-9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3.scope - libcontainer container 9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3. May 15 09:39:07.587997 systemd[1]: cri-containerd-9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3.scope: Deactivated successfully. 
May 15 09:39:07.591113 containerd[1447]: time="2025-05-15T09:39:07.591007866Z" level=info msg="StartContainer for \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\" returns successfully" May 15 09:39:07.644385 containerd[1447]: time="2025-05-15T09:39:07.644329350Z" level=info msg="shim disconnected" id=9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3 namespace=k8s.io May 15 09:39:07.644817 containerd[1447]: time="2025-05-15T09:39:07.644676341Z" level=warning msg="cleaning up after shim disconnected" id=9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3 namespace=k8s.io May 15 09:39:07.644817 containerd[1447]: time="2025-05-15T09:39:07.644693184Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:39:08.020620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3-rootfs.mount: Deactivated successfully. May 15 09:39:08.492635 kubelet[2522]: E0515 09:39:08.492599 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:08.493445 kubelet[2522]: E0515 09:39:08.492666 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:08.495244 containerd[1447]: time="2025-05-15T09:39:08.495204844Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 09:39:08.518345 containerd[1447]: time="2025-05-15T09:39:08.518208002Z" level=info msg="CreateContainer within sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id 
\"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\"" May 15 09:39:08.519072 containerd[1447]: time="2025-05-15T09:39:08.518874171Z" level=info msg="StartContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\"" May 15 09:39:08.547220 systemd[1]: Started cri-containerd-339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886.scope - libcontainer container 339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886. May 15 09:39:08.578020 containerd[1447]: time="2025-05-15T09:39:08.577977695Z" level=info msg="StartContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" returns successfully" May 15 09:39:08.762836 kubelet[2522]: I0515 09:39:08.761900 2522 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 09:39:08.794333 systemd[1]: Created slice kubepods-burstable-pod13137f64_1100_4d1b_9d8d_b2bf82896fa2.slice - libcontainer container kubepods-burstable-pod13137f64_1100_4d1b_9d8d_b2bf82896fa2.slice. May 15 09:39:08.800224 systemd[1]: Created slice kubepods-burstable-podaf7a4e6e_98e0_48ee_82d3_33257cab29ef.slice - libcontainer container kubepods-burstable-podaf7a4e6e_98e0_48ee_82d3_33257cab29ef.slice. 
May 15 09:39:08.814243 kubelet[2522]: I0515 09:39:08.814201 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg96g\" (UniqueName: \"kubernetes.io/projected/af7a4e6e-98e0-48ee-82d3-33257cab29ef-kube-api-access-lg96g\") pod \"coredns-668d6bf9bc-cwbzw\" (UID: \"af7a4e6e-98e0-48ee-82d3-33257cab29ef\") " pod="kube-system/coredns-668d6bf9bc-cwbzw" May 15 09:39:08.814243 kubelet[2522]: I0515 09:39:08.814239 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkwzq\" (UniqueName: \"kubernetes.io/projected/13137f64-1100-4d1b-9d8d-b2bf82896fa2-kube-api-access-xkwzq\") pod \"coredns-668d6bf9bc-lcgqx\" (UID: \"13137f64-1100-4d1b-9d8d-b2bf82896fa2\") " pod="kube-system/coredns-668d6bf9bc-lcgqx" May 15 09:39:08.814352 kubelet[2522]: I0515 09:39:08.814261 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/13137f64-1100-4d1b-9d8d-b2bf82896fa2-config-volume\") pod \"coredns-668d6bf9bc-lcgqx\" (UID: \"13137f64-1100-4d1b-9d8d-b2bf82896fa2\") " pod="kube-system/coredns-668d6bf9bc-lcgqx" May 15 09:39:08.814352 kubelet[2522]: I0515 09:39:08.814281 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af7a4e6e-98e0-48ee-82d3-33257cab29ef-config-volume\") pod \"coredns-668d6bf9bc-cwbzw\" (UID: \"af7a4e6e-98e0-48ee-82d3-33257cab29ef\") " pod="kube-system/coredns-668d6bf9bc-cwbzw" May 15 09:39:09.097479 kubelet[2522]: E0515 09:39:09.097428 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:09.098319 containerd[1447]: time="2025-05-15T09:39:09.098279683Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-lcgqx,Uid:13137f64-1100-4d1b-9d8d-b2bf82896fa2,Namespace:kube-system,Attempt:0,}" May 15 09:39:09.103098 kubelet[2522]: E0515 09:39:09.102840 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:09.104259 containerd[1447]: time="2025-05-15T09:39:09.103934203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwbzw,Uid:af7a4e6e-98e0-48ee-82d3-33257cab29ef,Namespace:kube-system,Attempt:0,}" May 15 09:39:09.501167 kubelet[2522]: E0515 09:39:09.500818 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:09.522103 kubelet[2522]: I0515 09:39:09.521198 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tv2rz" podStartSLOduration=5.371019608 podStartE2EDuration="11.521183205s" podCreationTimestamp="2025-05-15 09:38:58 +0000 UTC" firstStartedPulling="2025-05-15 09:38:58.843801243 +0000 UTC m=+6.525086475" lastFinishedPulling="2025-05-15 09:39:04.99396484 +0000 UTC m=+12.675250072" observedRunningTime="2025-05-15 09:39:09.520889431 +0000 UTC m=+17.202174663" watchObservedRunningTime="2025-05-15 09:39:09.521183205 +0000 UTC m=+17.202468437" May 15 09:39:10.502356 kubelet[2522]: E0515 09:39:10.502275 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:10.810333 systemd-networkd[1379]: cilium_host: Link UP May 15 09:39:10.810450 systemd-networkd[1379]: cilium_net: Link UP May 15 09:39:10.810568 systemd-networkd[1379]: cilium_net: Gained carrier May 15 09:39:10.810696 systemd-networkd[1379]: cilium_host: Gained carrier May 15 09:39:10.905788 
systemd-networkd[1379]: cilium_vxlan: Link UP May 15 09:39:10.905949 systemd-networkd[1379]: cilium_vxlan: Gained carrier May 15 09:39:11.134542 systemd-networkd[1379]: cilium_host: Gained IPv6LL May 15 09:39:11.269086 kernel: NET: Registered PF_ALG protocol family May 15 09:39:11.503734 kubelet[2522]: E0515 09:39:11.503638 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:11.524273 systemd-networkd[1379]: cilium_net: Gained IPv6LL May 15 09:39:11.848205 systemd-networkd[1379]: lxc_health: Link UP May 15 09:39:11.848939 systemd-networkd[1379]: lxc_health: Gained carrier May 15 09:39:12.232256 systemd-networkd[1379]: lxc1a08613ebeb1: Link UP May 15 09:39:12.254225 kernel: eth0: renamed from tmp7bb2e May 15 09:39:12.260083 kernel: eth0: renamed from tmp3b43d May 15 09:39:12.269194 systemd-networkd[1379]: tmp7bb2e: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 15 09:39:12.269290 systemd-networkd[1379]: tmp7bb2e: Cannot enable IPv6, ignoring: No such file or directory May 15 09:39:12.269322 systemd-networkd[1379]: tmp7bb2e: Cannot configure IPv6 privacy extensions for interface, ignoring: No such file or directory May 15 09:39:12.269335 systemd-networkd[1379]: tmp7bb2e: Cannot disable kernel IPv6 accept_ra for interface, ignoring: No such file or directory May 15 09:39:12.269346 systemd-networkd[1379]: tmp7bb2e: Cannot set IPv6 proxy NDP, ignoring: No such file or directory May 15 09:39:12.269360 systemd-networkd[1379]: tmp7bb2e: Cannot enable promote_secondaries for interface, ignoring: No such file or directory May 15 09:39:12.270031 systemd-networkd[1379]: lxc60be1bc71c01: Link UP May 15 09:39:12.270946 systemd-networkd[1379]: lxc60be1bc71c01: Gained carrier May 15 09:39:12.271224 systemd-networkd[1379]: lxc1a08613ebeb1: Gained carrier May 15 09:39:12.675227 systemd-networkd[1379]: cilium_vxlan: Gained IPv6LL May 15 09:39:12.734886 kubelet[2522]: E0515 09:39:12.734846 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:12.995347 systemd-networkd[1379]: lxc_health: Gained IPv6LL May 15 09:39:13.508957 kubelet[2522]: E0515 09:39:13.508923 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:14.083574 systemd-networkd[1379]: lxc1a08613ebeb1: Gained IPv6LL May 15 09:39:14.275321 systemd-networkd[1379]: lxc60be1bc71c01: Gained IPv6LL May 15 09:39:14.510568 kubelet[2522]: E0515 09:39:14.510126 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:15.810136 containerd[1447]: time="2025-05-15T09:39:15.810020736Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:39:15.810136 containerd[1447]: time="2025-05-15T09:39:15.810097707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:39:15.810136 containerd[1447]: time="2025-05-15T09:39:15.810108909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:39:15.810587 containerd[1447]: time="2025-05-15T09:39:15.810186760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:39:15.813117 containerd[1447]: time="2025-05-15T09:39:15.812268571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:39:15.813117 containerd[1447]: time="2025-05-15T09:39:15.812346982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:39:15.813117 containerd[1447]: time="2025-05-15T09:39:15.812358344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:39:15.813117 containerd[1447]: time="2025-05-15T09:39:15.812700792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:39:15.842219 systemd[1]: Started cri-containerd-3b43d0288e82735f6f05b41b87a4227838ec0b0df1febe9b5f7206fba95e8881.scope - libcontainer container 3b43d0288e82735f6f05b41b87a4227838ec0b0df1febe9b5f7206fba95e8881. 
May 15 09:39:15.843295 systemd[1]: Started cri-containerd-7bb2eac09f8e74d97dcfd757da5c8db73c3f48f056ea32ad37c235afd4831e70.scope - libcontainer container 7bb2eac09f8e74d97dcfd757da5c8db73c3f48f056ea32ad37c235afd4831e70. May 15 09:39:15.856456 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 09:39:15.857497 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 09:39:15.874536 containerd[1447]: time="2025-05-15T09:39:15.874464850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cwbzw,Uid:af7a4e6e-98e0-48ee-82d3-33257cab29ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb2eac09f8e74d97dcfd757da5c8db73c3f48f056ea32ad37c235afd4831e70\"" May 15 09:39:15.875798 kubelet[2522]: E0515 09:39:15.875225 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:15.879366 containerd[1447]: time="2025-05-15T09:39:15.879252122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lcgqx,Uid:13137f64-1100-4d1b-9d8d-b2bf82896fa2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b43d0288e82735f6f05b41b87a4227838ec0b0df1febe9b5f7206fba95e8881\"" May 15 09:39:15.879973 containerd[1447]: time="2025-05-15T09:39:15.879937458Z" level=info msg="CreateContainer within sandbox \"7bb2eac09f8e74d97dcfd757da5c8db73c3f48f056ea32ad37c235afd4831e70\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 09:39:15.880794 kubelet[2522]: E0515 09:39:15.880724 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:15.884210 containerd[1447]: time="2025-05-15T09:39:15.884184413Z" level=info 
msg="CreateContainer within sandbox \"3b43d0288e82735f6f05b41b87a4227838ec0b0df1febe9b5f7206fba95e8881\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 09:39:15.902637 containerd[1447]: time="2025-05-15T09:39:15.902593434Z" level=info msg="CreateContainer within sandbox \"7bb2eac09f8e74d97dcfd757da5c8db73c3f48f056ea32ad37c235afd4831e70\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ad7bb6cd675f525f66e4d6835a9d4bb482d316cba17130b4e37a254c4d94eb7\"" May 15 09:39:15.903215 containerd[1447]: time="2025-05-15T09:39:15.903188757Z" level=info msg="StartContainer for \"1ad7bb6cd675f525f66e4d6835a9d4bb482d316cba17130b4e37a254c4d94eb7\"" May 15 09:39:15.905436 containerd[1447]: time="2025-05-15T09:39:15.905339379Z" level=info msg="CreateContainer within sandbox \"3b43d0288e82735f6f05b41b87a4227838ec0b0df1febe9b5f7206fba95e8881\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6fe953ab20db90f8d1152b6bb8ede7211f35301a04d1d1080860c6158355e28d\"" May 15 09:39:15.905804 containerd[1447]: time="2025-05-15T09:39:15.905762998Z" level=info msg="StartContainer for \"6fe953ab20db90f8d1152b6bb8ede7211f35301a04d1d1080860c6158355e28d\"" May 15 09:39:15.933229 systemd[1]: Started cri-containerd-1ad7bb6cd675f525f66e4d6835a9d4bb482d316cba17130b4e37a254c4d94eb7.scope - libcontainer container 1ad7bb6cd675f525f66e4d6835a9d4bb482d316cba17130b4e37a254c4d94eb7. May 15 09:39:15.935860 systemd[1]: Started cri-containerd-6fe953ab20db90f8d1152b6bb8ede7211f35301a04d1d1080860c6158355e28d.scope - libcontainer container 6fe953ab20db90f8d1152b6bb8ede7211f35301a04d1d1080860c6158355e28d. 
May 15 09:39:15.959301 containerd[1447]: time="2025-05-15T09:39:15.959229813Z" level=info msg="StartContainer for \"6fe953ab20db90f8d1152b6bb8ede7211f35301a04d1d1080860c6158355e28d\" returns successfully" May 15 09:39:15.963150 containerd[1447]: time="2025-05-15T09:39:15.961720522Z" level=info msg="StartContainer for \"1ad7bb6cd675f525f66e4d6835a9d4bb482d316cba17130b4e37a254c4d94eb7\" returns successfully" May 15 09:39:16.514615 kubelet[2522]: E0515 09:39:16.514562 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:16.518019 kubelet[2522]: E0515 09:39:16.517993 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:16.527896 kubelet[2522]: I0515 09:39:16.527822 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cwbzw" podStartSLOduration=18.527800397 podStartE2EDuration="18.527800397s" podCreationTimestamp="2025-05-15 09:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:39:16.52492265 +0000 UTC m=+24.206207882" watchObservedRunningTime="2025-05-15 09:39:16.527800397 +0000 UTC m=+24.209085669" May 15 09:39:16.546569 kubelet[2522]: I0515 09:39:16.546457 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lcgqx" podStartSLOduration=18.546441422 podStartE2EDuration="18.546441422s" podCreationTimestamp="2025-05-15 09:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:39:16.545499656 +0000 UTC m=+24.226784888" watchObservedRunningTime="2025-05-15 09:39:16.546441422 +0000 UTC 
m=+24.227726614" May 15 09:39:17.519684 kubelet[2522]: E0515 09:39:17.519596 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:17.519684 kubelet[2522]: E0515 09:39:17.519650 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:18.522308 kubelet[2522]: E0515 09:39:18.521990 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:18.522308 kubelet[2522]: E0515 09:39:18.522070 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:39:19.035777 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:52758.service - OpenSSH per-connection server daemon (10.0.0.1:52758). May 15 09:39:19.080348 sshd[3932]: Accepted publickey for core from 10.0.0.1 port 52758 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:19.081835 sshd-session[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:19.086114 systemd-logind[1424]: New session 8 of user core. May 15 09:39:19.097257 systemd[1]: Started session-8.scope - Session 8 of User core. May 15 09:39:19.217508 sshd[3934]: Connection closed by 10.0.0.1 port 52758 May 15 09:39:19.217842 sshd-session[3932]: pam_unix(sshd:session): session closed for user core May 15 09:39:19.221350 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:52758.service: Deactivated successfully. May 15 09:39:19.223012 systemd[1]: session-8.scope: Deactivated successfully. May 15 09:39:19.223602 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. 
May 15 09:39:19.224449 systemd-logind[1424]: Removed session 8. May 15 09:39:24.235501 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:44202.service - OpenSSH per-connection server daemon (10.0.0.1:44202). May 15 09:39:24.273279 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 44202 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:24.274582 sshd-session[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:24.278091 systemd-logind[1424]: New session 9 of user core. May 15 09:39:24.288218 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 09:39:24.395528 sshd[3953]: Connection closed by 10.0.0.1 port 44202 May 15 09:39:24.396080 sshd-session[3951]: pam_unix(sshd:session): session closed for user core May 15 09:39:24.399094 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:44202.service: Deactivated successfully. May 15 09:39:24.401131 systemd[1]: session-9.scope: Deactivated successfully. May 15 09:39:24.401667 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. May 15 09:39:24.402657 systemd-logind[1424]: Removed session 9. May 15 09:39:29.406788 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:44206.service - OpenSSH per-connection server daemon (10.0.0.1:44206). May 15 09:39:29.447001 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 44206 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:29.448102 sshd-session[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:29.451451 systemd-logind[1424]: New session 10 of user core. May 15 09:39:29.460250 systemd[1]: Started session-10.scope - Session 10 of User core. 
May 15 09:39:29.569164 sshd[3972]: Connection closed by 10.0.0.1 port 44206 May 15 09:39:29.569625 sshd-session[3970]: pam_unix(sshd:session): session closed for user core May 15 09:39:29.572795 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:44206.service: Deactivated successfully. May 15 09:39:29.574442 systemd[1]: session-10.scope: Deactivated successfully. May 15 09:39:29.574996 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. May 15 09:39:29.575782 systemd-logind[1424]: Removed session 10. May 15 09:39:34.580498 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:50016.service - OpenSSH per-connection server daemon (10.0.0.1:50016). May 15 09:39:34.621784 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 50016 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:34.622880 sshd-session[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:34.626134 systemd-logind[1424]: New session 11 of user core. May 15 09:39:34.638176 systemd[1]: Started session-11.scope - Session 11 of User core. May 15 09:39:34.744302 sshd[3987]: Connection closed by 10.0.0.1 port 50016 May 15 09:39:34.744630 sshd-session[3985]: pam_unix(sshd:session): session closed for user core May 15 09:39:34.758562 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:50016.service: Deactivated successfully. May 15 09:39:34.760119 systemd[1]: session-11.scope: Deactivated successfully. May 15 09:39:34.762179 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. May 15 09:39:34.764044 systemd-logind[1424]: Removed session 11. May 15 09:39:34.766222 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:50032.service - OpenSSH per-connection server daemon (10.0.0.1:50032). 
May 15 09:39:34.803336 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 50032 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:34.804608 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:34.809118 systemd-logind[1424]: New session 12 of user core. May 15 09:39:34.818194 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 09:39:34.963513 sshd[4003]: Connection closed by 10.0.0.1 port 50032 May 15 09:39:34.964591 sshd-session[4001]: pam_unix(sshd:session): session closed for user core May 15 09:39:34.971023 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:50032.service: Deactivated successfully. May 15 09:39:34.973245 systemd[1]: session-12.scope: Deactivated successfully. May 15 09:39:34.975026 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. May 15 09:39:34.985549 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:50042.service - OpenSSH per-connection server daemon (10.0.0.1:50042). May 15 09:39:34.986695 systemd-logind[1424]: Removed session 12. May 15 09:39:35.025263 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 50042 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:35.026241 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:35.031540 systemd-logind[1424]: New session 13 of user core. May 15 09:39:35.037193 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 09:39:35.145490 sshd[4017]: Connection closed by 10.0.0.1 port 50042 May 15 09:39:35.146003 sshd-session[4015]: pam_unix(sshd:session): session closed for user core May 15 09:39:35.149114 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:50042.service: Deactivated successfully. May 15 09:39:35.151273 systemd[1]: session-13.scope: Deactivated successfully. May 15 09:39:35.151881 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. 
May 15 09:39:35.153148 systemd-logind[1424]: Removed session 13. May 15 09:39:40.156743 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:50048.service - OpenSSH per-connection server daemon (10.0.0.1:50048). May 15 09:39:40.195040 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 50048 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:40.196205 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:40.199818 systemd-logind[1424]: New session 14 of user core. May 15 09:39:40.209197 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 09:39:40.316162 sshd[4031]: Connection closed by 10.0.0.1 port 50048 May 15 09:39:40.316641 sshd-session[4029]: pam_unix(sshd:session): session closed for user core May 15 09:39:40.319351 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:50048.service: Deactivated successfully. May 15 09:39:40.321489 systemd[1]: session-14.scope: Deactivated successfully. May 15 09:39:40.323391 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. May 15 09:39:40.324633 systemd-logind[1424]: Removed session 14. May 15 09:39:45.326535 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:60812.service - OpenSSH per-connection server daemon (10.0.0.1:60812). May 15 09:39:45.364957 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 60812 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:45.366130 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:45.369715 systemd-logind[1424]: New session 15 of user core. May 15 09:39:45.384195 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 15 09:39:45.492143 sshd[4045]: Connection closed by 10.0.0.1 port 60812 May 15 09:39:45.492605 sshd-session[4043]: pam_unix(sshd:session): session closed for user core May 15 09:39:45.504529 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:60812.service: Deactivated successfully. May 15 09:39:45.505960 systemd[1]: session-15.scope: Deactivated successfully. May 15 09:39:45.508110 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. May 15 09:39:45.519274 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:60824.service - OpenSSH per-connection server daemon (10.0.0.1:60824). May 15 09:39:45.520058 systemd-logind[1424]: Removed session 15. May 15 09:39:45.552860 sshd[4057]: Accepted publickey for core from 10.0.0.1 port 60824 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:45.553956 sshd-session[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:45.557303 systemd-logind[1424]: New session 16 of user core. May 15 09:39:45.574196 systemd[1]: Started session-16.scope - Session 16 of User core. May 15 09:39:45.772164 sshd[4059]: Connection closed by 10.0.0.1 port 60824 May 15 09:39:45.772019 sshd-session[4057]: pam_unix(sshd:session): session closed for user core May 15 09:39:45.783614 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:60824.service: Deactivated successfully. May 15 09:39:45.785907 systemd[1]: session-16.scope: Deactivated successfully. May 15 09:39:45.787451 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. May 15 09:39:45.789030 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:60834.service - OpenSSH per-connection server daemon (10.0.0.1:60834). May 15 09:39:45.789734 systemd-logind[1424]: Removed session 16. 
May 15 09:39:45.832764 sshd[4070]: Accepted publickey for core from 10.0.0.1 port 60834 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:45.834036 sshd-session[4070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:45.837686 systemd-logind[1424]: New session 17 of user core. May 15 09:39:45.848217 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 09:39:46.543775 sshd[4073]: Connection closed by 10.0.0.1 port 60834 May 15 09:39:46.544722 sshd-session[4070]: pam_unix(sshd:session): session closed for user core May 15 09:39:46.555366 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:60834.service: Deactivated successfully. May 15 09:39:46.557907 systemd[1]: session-17.scope: Deactivated successfully. May 15 09:39:46.563184 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. May 15 09:39:46.569393 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:60850.service - OpenSSH per-connection server daemon (10.0.0.1:60850). May 15 09:39:46.570645 systemd-logind[1424]: Removed session 17. May 15 09:39:46.608456 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 60850 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:46.609644 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:46.613712 systemd-logind[1424]: New session 18 of user core. May 15 09:39:46.621202 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 09:39:46.831656 sshd[4094]: Connection closed by 10.0.0.1 port 60850 May 15 09:39:46.832154 sshd-session[4092]: pam_unix(sshd:session): session closed for user core May 15 09:39:46.842940 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:60850.service: Deactivated successfully. May 15 09:39:46.844370 systemd[1]: session-18.scope: Deactivated successfully. May 15 09:39:46.845939 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. 
May 15 09:39:46.847269 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:60852.service - OpenSSH per-connection server daemon (10.0.0.1:60852). May 15 09:39:46.848392 systemd-logind[1424]: Removed session 18. May 15 09:39:46.892420 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 60852 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:46.893858 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:46.898068 systemd-logind[1424]: New session 19 of user core. May 15 09:39:46.908224 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 09:39:47.018561 sshd[4107]: Connection closed by 10.0.0.1 port 60852 May 15 09:39:47.018897 sshd-session[4105]: pam_unix(sshd:session): session closed for user core May 15 09:39:47.022091 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:60852.service: Deactivated successfully. May 15 09:39:47.023754 systemd[1]: session-19.scope: Deactivated successfully. May 15 09:39:47.024363 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. May 15 09:39:47.025082 systemd-logind[1424]: Removed session 19. May 15 09:39:52.030195 systemd[1]: Started sshd@19-10.0.0.103:22-10.0.0.1:60860.service - OpenSSH per-connection server daemon (10.0.0.1:60860). May 15 09:39:52.066925 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 60860 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:52.067975 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:52.071589 systemd-logind[1424]: New session 20 of user core. May 15 09:39:52.077180 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 15 09:39:52.180349 sshd[4125]: Connection closed by 10.0.0.1 port 60860 May 15 09:39:52.180683 sshd-session[4123]: pam_unix(sshd:session): session closed for user core May 15 09:39:52.184156 systemd[1]: sshd@19-10.0.0.103:22-10.0.0.1:60860.service: Deactivated successfully. May 15 09:39:52.186156 systemd[1]: session-20.scope: Deactivated successfully. May 15 09:39:52.188569 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. May 15 09:39:52.189446 systemd-logind[1424]: Removed session 20. May 15 09:39:57.197637 systemd[1]: Started sshd@20-10.0.0.103:22-10.0.0.1:37326.service - OpenSSH per-connection server daemon (10.0.0.1:37326). May 15 09:39:57.235629 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 37326 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:39:57.236864 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:39:57.240809 systemd-logind[1424]: New session 21 of user core. May 15 09:39:57.251185 systemd[1]: Started session-21.scope - Session 21 of User core. May 15 09:39:57.356343 sshd[4141]: Connection closed by 10.0.0.1 port 37326 May 15 09:39:57.356731 sshd-session[4139]: pam_unix(sshd:session): session closed for user core May 15 09:39:57.359880 systemd[1]: sshd@20-10.0.0.103:22-10.0.0.1:37326.service: Deactivated successfully. May 15 09:39:57.362502 systemd[1]: session-21.scope: Deactivated successfully. May 15 09:39:57.363569 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. May 15 09:39:57.364444 systemd-logind[1424]: Removed session 21. May 15 09:40:02.366525 systemd[1]: Started sshd@21-10.0.0.103:22-10.0.0.1:37332.service - OpenSSH per-connection server daemon (10.0.0.1:37332). 
May 15 09:40:02.404427 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 37332 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:40:02.405518 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:40:02.409114 systemd-logind[1424]: New session 22 of user core. May 15 09:40:02.416183 systemd[1]: Started session-22.scope - Session 22 of User core. May 15 09:40:02.519716 sshd[4158]: Connection closed by 10.0.0.1 port 37332 May 15 09:40:02.519590 sshd-session[4156]: pam_unix(sshd:session): session closed for user core May 15 09:40:02.531473 systemd[1]: sshd@21-10.0.0.103:22-10.0.0.1:37332.service: Deactivated successfully. May 15 09:40:02.532844 systemd[1]: session-22.scope: Deactivated successfully. May 15 09:40:02.534266 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. May 15 09:40:02.543440 systemd[1]: Started sshd@22-10.0.0.103:22-10.0.0.1:53974.service - OpenSSH per-connection server daemon (10.0.0.1:53974). May 15 09:40:02.544601 systemd-logind[1424]: Removed session 22. May 15 09:40:02.577497 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 53974 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:40:02.578545 sshd-session[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:40:02.584673 systemd-logind[1424]: New session 23 of user core. May 15 09:40:02.589171 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 09:40:04.798374 containerd[1447]: time="2025-05-15T09:40:04.798318097Z" level=info msg="StopContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" with timeout 30 (s)" May 15 09:40:04.799314 containerd[1447]: time="2025-05-15T09:40:04.799282336Z" level=info msg="Stop container \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" with signal terminated" May 15 09:40:04.810032 systemd[1]: cri-containerd-f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e.scope: Deactivated successfully. May 15 09:40:04.830847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e-rootfs.mount: Deactivated successfully. May 15 09:40:04.836870 containerd[1447]: time="2025-05-15T09:40:04.836811182Z" level=info msg="shim disconnected" id=f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e namespace=k8s.io May 15 09:40:04.836870 containerd[1447]: time="2025-05-15T09:40:04.836862340Z" level=warning msg="cleaning up after shim disconnected" id=f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e namespace=k8s.io May 15 09:40:04.836870 containerd[1447]: time="2025-05-15T09:40:04.836870459Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:04.845614 containerd[1447]: time="2025-05-15T09:40:04.845575410Z" level=info msg="StopContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" with timeout 2 (s)" May 15 09:40:04.845921 containerd[1447]: time="2025-05-15T09:40:04.845839158Z" level=info msg="Stop container \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" with signal terminated" May 15 09:40:04.851258 systemd-networkd[1379]: lxc_health: Link DOWN May 15 09:40:04.851835 systemd-networkd[1379]: lxc_health: Lost carrier May 15 09:40:04.871501 containerd[1447]: time="2025-05-15T09:40:04.871156923Z" level=error msg="failed to reload cni configuration after receiving fs 
change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 09:40:04.876004 systemd[1]: cri-containerd-339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886.scope: Deactivated successfully. May 15 09:40:04.876526 systemd[1]: cri-containerd-339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886.scope: Consumed 6.579s CPU time. May 15 09:40:04.893421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886-rootfs.mount: Deactivated successfully. May 15 09:40:04.899221 containerd[1447]: time="2025-05-15T09:40:04.899148054Z" level=info msg="StopContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" returns successfully" May 15 09:40:04.900720 containerd[1447]: time="2025-05-15T09:40:04.900676509Z" level=info msg="shim disconnected" id=339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886 namespace=k8s.io May 15 09:40:04.900720 containerd[1447]: time="2025-05-15T09:40:04.900721628Z" level=warning msg="cleaning up after shim disconnected" id=339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886 namespace=k8s.io May 15 09:40:04.900839 containerd[1447]: time="2025-05-15T09:40:04.900729907Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:04.902207 containerd[1447]: time="2025-05-15T09:40:04.902132088Z" level=info msg="StopPodSandbox for \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\"" May 15 09:40:04.906422 containerd[1447]: time="2025-05-15T09:40:04.906380147Z" level=info msg="Container to stop \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.908064 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4-shm.mount: Deactivated successfully. May 15 09:40:04.914688 containerd[1447]: time="2025-05-15T09:40:04.914466684Z" level=info msg="StopContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" returns successfully" May 15 09:40:04.914752 systemd[1]: cri-containerd-09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4.scope: Deactivated successfully. May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915440402Z" level=info msg="StopPodSandbox for \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\"" May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915480801Z" level=info msg="Container to stop \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915492840Z" level=info msg="Container to stop \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915503480Z" level=info msg="Container to stop \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915512559Z" level=info msg="Container to stop \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.915656 containerd[1447]: time="2025-05-15T09:40:04.915521479Z" level=info msg="Container to stop \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 09:40:04.917237 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518-shm.mount: Deactivated successfully. May 15 09:40:04.923602 systemd[1]: cri-containerd-dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518.scope: Deactivated successfully. May 15 09:40:04.941894 containerd[1447]: time="2025-05-15T09:40:04.941830482Z" level=info msg="shim disconnected" id=09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4 namespace=k8s.io May 15 09:40:04.941894 containerd[1447]: time="2025-05-15T09:40:04.941882799Z" level=warning msg="cleaning up after shim disconnected" id=09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4 namespace=k8s.io May 15 09:40:04.941894 containerd[1447]: time="2025-05-15T09:40:04.941895319Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:04.942383 containerd[1447]: time="2025-05-15T09:40:04.941866960Z" level=info msg="shim disconnected" id=dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518 namespace=k8s.io May 15 09:40:04.942383 containerd[1447]: time="2025-05-15T09:40:04.942114869Z" level=warning msg="cleaning up after shim disconnected" id=dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518 namespace=k8s.io May 15 09:40:04.942383 containerd[1447]: time="2025-05-15T09:40:04.942122149Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:04.952556 containerd[1447]: time="2025-05-15T09:40:04.952498028Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:40:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 15 09:40:04.954330 containerd[1447]: time="2025-05-15T09:40:04.954277993Z" level=info msg="TearDown network for sandbox \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\" successfully" May 15 09:40:04.954330 containerd[1447]: 
time="2025-05-15T09:40:04.954307432Z" level=info msg="StopPodSandbox for \"09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4\" returns successfully" May 15 09:40:04.955330 containerd[1447]: time="2025-05-15T09:40:04.955078999Z" level=info msg="TearDown network for sandbox \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" successfully" May 15 09:40:04.955330 containerd[1447]: time="2025-05-15T09:40:04.955102918Z" level=info msg="StopPodSandbox for \"dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518\" returns successfully" May 15 09:40:05.058897 kubelet[2522]: I0515 09:40:05.058783 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5453131f-82d0-4da3-88ef-4f77543a406e-cilium-config-path\") pod \"5453131f-82d0-4da3-88ef-4f77543a406e\" (UID: \"5453131f-82d0-4da3-88ef-4f77543a406e\") " May 15 09:40:05.058897 kubelet[2522]: I0515 09:40:05.058823 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-kernel\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.058897 kubelet[2522]: I0515 09:40:05.058842 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-run\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.058897 kubelet[2522]: I0515 09:40:05.058857 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-xtables-lock\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 
09:40:05.058897 kubelet[2522]: I0515 09:40:05.058875 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6nb8\" (UniqueName: \"kubernetes.io/projected/5453131f-82d0-4da3-88ef-4f77543a406e-kube-api-access-k6nb8\") pod \"5453131f-82d0-4da3-88ef-4f77543a406e\" (UID: \"5453131f-82d0-4da3-88ef-4f77543a406e\") " May 15 09:40:05.058897 kubelet[2522]: I0515 09:40:05.058890 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-net\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058903 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-lib-modules\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058918 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-cgroup\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058936 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpz9k\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-kube-api-access-bpz9k\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058951 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-bpf-maps\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058965 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-hostproc\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.059889 kubelet[2522]: I0515 09:40:05.058978 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-etc-cni-netd\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.060022 kubelet[2522]: I0515 09:40:05.058992 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cni-path\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.060022 kubelet[2522]: I0515 09:40:05.059014 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-hubble-tls\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.060022 kubelet[2522]: I0515 09:40:05.059032 2522 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/25969c07-3261-434d-8148-b4b33fcf9687-clustermesh-secrets\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.060022 kubelet[2522]: I0515 09:40:05.059072 2522 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25969c07-3261-434d-8148-b4b33fcf9687-cilium-config-path\") pod \"25969c07-3261-434d-8148-b4b33fcf9687\" (UID: \"25969c07-3261-434d-8148-b4b33fcf9687\") " May 15 09:40:05.062766 kubelet[2522]: I0515 09:40:05.062736 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.063124 kubelet[2522]: I0515 09:40:05.062751 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.063381 kubelet[2522]: I0515 09:40:05.063319 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.063417 kubelet[2522]: I0515 09:40:05.063386 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.063417 kubelet[2522]: I0515 09:40:05.063405 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.065139 kubelet[2522]: I0515 09:40:05.064979 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25969c07-3261-434d-8148-b4b33fcf9687-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 09:40:05.065139 kubelet[2522]: I0515 09:40:05.065038 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cni-path" (OuterVolumeSpecName: "cni-path") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.065139 kubelet[2522]: I0515 09:40:05.065033 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5453131f-82d0-4da3-88ef-4f77543a406e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5453131f-82d0-4da3-88ef-4f77543a406e" (UID: "5453131f-82d0-4da3-88ef-4f77543a406e"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 09:40:05.065139 kubelet[2522]: I0515 09:40:05.065081 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.065139 kubelet[2522]: I0515 09:40:05.065108 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.065958 kubelet[2522]: I0515 09:40:05.065869 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5453131f-82d0-4da3-88ef-4f77543a406e-kube-api-access-k6nb8" (OuterVolumeSpecName: "kube-api-access-k6nb8") pod "5453131f-82d0-4da3-88ef-4f77543a406e" (UID: "5453131f-82d0-4da3-88ef-4f77543a406e"). InnerVolumeSpecName "kube-api-access-k6nb8". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 09:40:05.065958 kubelet[2522]: I0515 09:40:05.065918 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.065958 kubelet[2522]: I0515 09:40:05.065936 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-hostproc" (OuterVolumeSpecName: "hostproc") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 09:40:05.066400 kubelet[2522]: I0515 09:40:05.066367 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-kube-api-access-bpz9k" (OuterVolumeSpecName: "kube-api-access-bpz9k") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "kube-api-access-bpz9k". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 09:40:05.066978 kubelet[2522]: I0515 09:40:05.066957 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 09:40:05.068953 kubelet[2522]: I0515 09:40:05.068923 2522 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25969c07-3261-434d-8148-b4b33fcf9687-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "25969c07-3261-434d-8148-b4b33fcf9687" (UID: "25969c07-3261-434d-8148-b4b33fcf9687"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159768 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5453131f-82d0-4da3-88ef-4f77543a406e-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159792 2522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159801 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159809 2522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k6nb8\" (UniqueName: \"kubernetes.io/projected/5453131f-82d0-4da3-88ef-4f77543a406e-kube-api-access-k6nb8\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159817 2522 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159824 2522 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 kubelet[2522]: I0515 09:40:05.159832 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.159847 
kubelet[2522]: I0515 09:40:05.159839 2522 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159847 2522 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bpz9k\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-kube-api-access-bpz9k\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159855 2522 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159863 2522 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159871 2522 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159898 2522 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/25969c07-3261-434d-8148-b4b33fcf9687-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159905 2522 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/25969c07-3261-434d-8148-b4b33fcf9687-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159920 2522 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/25969c07-3261-434d-8148-b4b33fcf9687-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.160105 kubelet[2522]: I0515 09:40:05.159930 2522 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25969c07-3261-434d-8148-b4b33fcf9687-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 09:40:05.626987 kubelet[2522]: I0515 09:40:05.626962 2522 scope.go:117] "RemoveContainer" containerID="f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e" May 15 09:40:05.629013 containerd[1447]: time="2025-05-15T09:40:05.628924711Z" level=info msg="RemoveContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\"" May 15 09:40:05.633258 systemd[1]: Removed slice kubepods-besteffort-pod5453131f_82d0_4da3_88ef_4f77543a406e.slice - libcontainer container kubepods-besteffort-pod5453131f_82d0_4da3_88ef_4f77543a406e.slice. May 15 09:40:05.635112 containerd[1447]: time="2025-05-15T09:40:05.634968909Z" level=info msg="RemoveContainer for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" returns successfully" May 15 09:40:05.635426 kubelet[2522]: I0515 09:40:05.635404 2522 scope.go:117] "RemoveContainer" containerID="f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e" May 15 09:40:05.636092 containerd[1447]: time="2025-05-15T09:40:05.636029587Z" level=error msg="ContainerStatus for \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\": not found" May 15 09:40:05.636403 kubelet[2522]: E0515 09:40:05.636377 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\": not found" 
containerID="f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e" May 15 09:40:05.636523 kubelet[2522]: I0515 09:40:05.636414 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e"} err="failed to get container status \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1b225674a8b0d440aa6e532235ff44ffa480420c02e74e53d91620a663b804e\": not found" May 15 09:40:05.636523 kubelet[2522]: I0515 09:40:05.636501 2522 scope.go:117] "RemoveContainer" containerID="339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886" May 15 09:40:05.638449 containerd[1447]: time="2025-05-15T09:40:05.637556966Z" level=info msg="RemoveContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\"" May 15 09:40:05.638205 systemd[1]: Removed slice kubepods-burstable-pod25969c07_3261_434d_8148_b4b33fcf9687.slice - libcontainer container kubepods-burstable-pod25969c07_3261_434d_8148_b4b33fcf9687.slice. May 15 09:40:05.638289 systemd[1]: kubepods-burstable-pod25969c07_3261_434d_8148_b4b33fcf9687.slice: Consumed 6.739s CPU time. 
May 15 09:40:05.642059 containerd[1447]: time="2025-05-15T09:40:05.640720479Z" level=info msg="RemoveContainer for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" returns successfully" May 15 09:40:05.642127 kubelet[2522]: I0515 09:40:05.641440 2522 scope.go:117] "RemoveContainer" containerID="9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3" May 15 09:40:05.642356 containerd[1447]: time="2025-05-15T09:40:05.642313175Z" level=info msg="RemoveContainer for \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\"" May 15 09:40:05.646540 containerd[1447]: time="2025-05-15T09:40:05.646473449Z" level=info msg="RemoveContainer for \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\" returns successfully" May 15 09:40:05.646824 kubelet[2522]: I0515 09:40:05.646660 2522 scope.go:117] "RemoveContainer" containerID="7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989" May 15 09:40:05.647744 containerd[1447]: time="2025-05-15T09:40:05.647717799Z" level=info msg="RemoveContainer for \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\"" May 15 09:40:05.650663 containerd[1447]: time="2025-05-15T09:40:05.650615243Z" level=info msg="RemoveContainer for \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\" returns successfully" May 15 09:40:05.650869 kubelet[2522]: I0515 09:40:05.650802 2522 scope.go:117] "RemoveContainer" containerID="a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2" May 15 09:40:05.652686 containerd[1447]: time="2025-05-15T09:40:05.652594404Z" level=info msg="RemoveContainer for \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\"" May 15 09:40:05.658033 containerd[1447]: time="2025-05-15T09:40:05.656721758Z" level=info msg="RemoveContainer for \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\" returns successfully" May 15 09:40:05.658315 kubelet[2522]: I0515 09:40:05.658267 2522 scope.go:117] 
"RemoveContainer" containerID="da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48" May 15 09:40:05.659725 containerd[1447]: time="2025-05-15T09:40:05.659695319Z" level=info msg="RemoveContainer for \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\"" May 15 09:40:05.662887 containerd[1447]: time="2025-05-15T09:40:05.662846673Z" level=info msg="RemoveContainer for \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\" returns successfully" May 15 09:40:05.663127 kubelet[2522]: I0515 09:40:05.663025 2522 scope.go:117] "RemoveContainer" containerID="339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886" May 15 09:40:05.663265 containerd[1447]: time="2025-05-15T09:40:05.663220018Z" level=error msg="ContainerStatus for \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\": not found" May 15 09:40:05.663364 kubelet[2522]: E0515 09:40:05.663336 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\": not found" containerID="339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886" May 15 09:40:05.663405 kubelet[2522]: I0515 09:40:05.663367 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886"} err="failed to get container status \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\": rpc error: code = NotFound desc = an error occurred when try to find container \"339f51521ae8cabade2deee27601bc192035e0d4a70ea4bd33adbf7ff8222886\": not found" May 15 09:40:05.663405 kubelet[2522]: I0515 09:40:05.663387 2522 scope.go:117] "RemoveContainer" 
containerID="9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3" May 15 09:40:05.663597 containerd[1447]: time="2025-05-15T09:40:05.663563605Z" level=error msg="ContainerStatus for \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\": not found" May 15 09:40:05.663671 kubelet[2522]: E0515 09:40:05.663655 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\": not found" containerID="9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3" May 15 09:40:05.663700 kubelet[2522]: I0515 09:40:05.663673 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3"} err="failed to get container status \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9bfbcb16a3120523a96d1326e043c629438f30ac149d6a1bce13b884d80696f3\": not found" May 15 09:40:05.663700 kubelet[2522]: I0515 09:40:05.663686 2522 scope.go:117] "RemoveContainer" containerID="7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989" May 15 09:40:05.663802 containerd[1447]: time="2025-05-15T09:40:05.663782356Z" level=error msg="ContainerStatus for \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\": not found" May 15 09:40:05.663875 kubelet[2522]: E0515 09:40:05.663859 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\": not found" containerID="7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989" May 15 09:40:05.663920 kubelet[2522]: I0515 09:40:05.663881 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989"} err="failed to get container status \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\": rpc error: code = NotFound desc = an error occurred when try to find container \"7a6bdcbe707a93d64f04337175830bcee4bc903084b9b876c27496ee01d04989\": not found" May 15 09:40:05.663920 kubelet[2522]: I0515 09:40:05.663919 2522 scope.go:117] "RemoveContainer" containerID="a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2" May 15 09:40:05.664039 containerd[1447]: time="2025-05-15T09:40:05.664018066Z" level=error msg="ContainerStatus for \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\": not found" May 15 09:40:05.664281 kubelet[2522]: E0515 09:40:05.664159 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\": not found" containerID="a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2" May 15 09:40:05.664281 kubelet[2522]: I0515 09:40:05.664200 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2"} err="failed to get container status \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"a7e938ac0690e49ff150313531633f0af498bd0b705aa59e8e628202c055e0d2\": not found" May 15 09:40:05.664281 kubelet[2522]: I0515 09:40:05.664215 2522 scope.go:117] "RemoveContainer" containerID="da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48" May 15 09:40:05.664375 containerd[1447]: time="2025-05-15T09:40:05.664331694Z" level=error msg="ContainerStatus for \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\": not found" May 15 09:40:05.664428 kubelet[2522]: E0515 09:40:05.664408 2522 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\": not found" containerID="da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48" May 15 09:40:05.664470 kubelet[2522]: I0515 09:40:05.664428 2522 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48"} err="failed to get container status \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\": rpc error: code = NotFound desc = an error occurred when try to find container \"da2979f080751e78ab7a1969eb069c81a091fffdbd5c84047a191521c3084b48\": not found" May 15 09:40:05.826796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09dc9b6de068f00482c241261a1aea16b3331a34d1abc1ee28463b5d707839f4-rootfs.mount: Deactivated successfully. May 15 09:40:05.826896 systemd[1]: var-lib-kubelet-pods-5453131f\x2d82d0\x2d4da3\x2d88ef\x2d4f77543a406e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6nb8.mount: Deactivated successfully. 
May 15 09:40:05.826950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcc8361493c17535e4503f200d4d04f6900078c8bd2a8455dee4bdfc99956518-rootfs.mount: Deactivated successfully. May 15 09:40:05.826997 systemd[1]: var-lib-kubelet-pods-25969c07\x2d3261\x2d434d\x2d8148\x2db4b33fcf9687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbpz9k.mount: Deactivated successfully. May 15 09:40:05.827070 systemd[1]: var-lib-kubelet-pods-25969c07\x2d3261\x2d434d\x2d8148\x2db4b33fcf9687-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 09:40:05.827127 systemd[1]: var-lib-kubelet-pods-25969c07\x2d3261\x2d434d\x2d8148\x2db4b33fcf9687-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 09:40:06.433611 kubelet[2522]: I0515 09:40:06.433562 2522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25969c07-3261-434d-8148-b4b33fcf9687" path="/var/lib/kubelet/pods/25969c07-3261-434d-8148-b4b33fcf9687/volumes" May 15 09:40:06.434151 kubelet[2522]: I0515 09:40:06.434111 2522 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5453131f-82d0-4da3-88ef-4f77543a406e" path="/var/lib/kubelet/pods/5453131f-82d0-4da3-88ef-4f77543a406e/volumes" May 15 09:40:06.764266 sshd[4172]: Connection closed by 10.0.0.1 port 53974 May 15 09:40:06.765072 sshd-session[4170]: pam_unix(sshd:session): session closed for user core May 15 09:40:06.772618 systemd[1]: sshd@22-10.0.0.103:22-10.0.0.1:53974.service: Deactivated successfully. May 15 09:40:06.774608 systemd[1]: session-23.scope: Deactivated successfully. May 15 09:40:06.775017 systemd[1]: session-23.scope: Consumed 1.552s CPU time. May 15 09:40:06.777224 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. May 15 09:40:06.778689 systemd[1]: Started sshd@23-10.0.0.103:22-10.0.0.1:53978.service - OpenSSH per-connection server daemon (10.0.0.1:53978). 
May 15 09:40:06.779513 systemd-logind[1424]: Removed session 23. May 15 09:40:06.821928 sshd[4330]: Accepted publickey for core from 10.0.0.1 port 53978 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:40:06.823135 sshd-session[4330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:40:06.826645 systemd-logind[1424]: New session 24 of user core. May 15 09:40:06.840222 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 09:40:07.468480 kubelet[2522]: E0515 09:40:07.468439 2522 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 09:40:08.747016 sshd[4332]: Connection closed by 10.0.0.1 port 53978 May 15 09:40:08.747576 sshd-session[4330]: pam_unix(sshd:session): session closed for user core May 15 09:40:08.757543 systemd[1]: sshd@23-10.0.0.103:22-10.0.0.1:53978.service: Deactivated successfully. May 15 09:40:08.760909 systemd[1]: session-24.scope: Deactivated successfully. May 15 09:40:08.761438 systemd[1]: session-24.scope: Consumed 1.824s CPU time. May 15 09:40:08.764155 systemd-logind[1424]: Session 24 logged out. Waiting for processes to exit. May 15 09:40:08.767184 kubelet[2522]: I0515 09:40:08.765776 2522 memory_manager.go:355] "RemoveStaleState removing state" podUID="25969c07-3261-434d-8148-b4b33fcf9687" containerName="cilium-agent" May 15 09:40:08.767184 kubelet[2522]: I0515 09:40:08.765802 2522 memory_manager.go:355] "RemoveStaleState removing state" podUID="5453131f-82d0-4da3-88ef-4f77543a406e" containerName="cilium-operator" May 15 09:40:08.779875 systemd[1]: Started sshd@24-10.0.0.103:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992). May 15 09:40:08.788280 systemd-logind[1424]: Removed session 24. 
May 15 09:40:08.797962 systemd[1]: Created slice kubepods-burstable-pod5ec2b1c3_12db_45db_845b_31e74d63e1e1.slice - libcontainer container kubepods-burstable-pod5ec2b1c3_12db_45db_845b_31e74d63e1e1.slice. May 15 09:40:08.827659 sshd[4343]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:40:08.829024 sshd-session[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:40:08.832757 systemd-logind[1424]: New session 25 of user core. May 15 09:40:08.851272 systemd[1]: Started session-25.scope - Session 25 of User core. May 15 09:40:08.878309 kubelet[2522]: I0515 09:40:08.878208 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-cilium-cgroup\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878309 kubelet[2522]: I0515 09:40:08.878313 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5ec2b1c3-12db-45db-845b-31e74d63e1e1-cilium-config-path\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878336 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-bpf-maps\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878354 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-hostproc\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878403 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5ec2b1c3-12db-45db-845b-31e74d63e1e1-clustermesh-secrets\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878418 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-host-proc-sys-net\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878432 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-cilium-run\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878474 kubelet[2522]: I0515 09:40:08.878446 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-etc-cni-netd\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878601 kubelet[2522]: I0515 09:40:08.878488 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-lib-modules\") pod \"cilium-w5nf5\" (UID: 
\"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878601 kubelet[2522]: I0515 09:40:08.878503 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-xtables-lock\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878601 kubelet[2522]: I0515 09:40:08.878518 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-host-proc-sys-kernel\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878601 kubelet[2522]: I0515 09:40:08.878554 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5ec2b1c3-12db-45db-845b-31e74d63e1e1-hubble-tls\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878601 kubelet[2522]: I0515 09:40:08.878571 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5ec2b1c3-12db-45db-845b-31e74d63e1e1-cni-path\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878695 kubelet[2522]: I0515 09:40:08.878596 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5ec2b1c3-12db-45db-845b-31e74d63e1e1-cilium-ipsec-secrets\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.878695 kubelet[2522]: I0515 
09:40:08.878641 2522 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv4cp\" (UniqueName: \"kubernetes.io/projected/5ec2b1c3-12db-45db-845b-31e74d63e1e1-kube-api-access-tv4cp\") pod \"cilium-w5nf5\" (UID: \"5ec2b1c3-12db-45db-845b-31e74d63e1e1\") " pod="kube-system/cilium-w5nf5" May 15 09:40:08.900737 sshd[4345]: Connection closed by 10.0.0.1 port 53992 May 15 09:40:08.901103 sshd-session[4343]: pam_unix(sshd:session): session closed for user core May 15 09:40:08.910937 systemd[1]: sshd@24-10.0.0.103:22-10.0.0.1:53992.service: Deactivated successfully. May 15 09:40:08.912672 systemd[1]: session-25.scope: Deactivated successfully. May 15 09:40:08.915861 systemd-logind[1424]: Session 25 logged out. Waiting for processes to exit. May 15 09:40:08.929387 systemd[1]: Started sshd@25-10.0.0.103:22-10.0.0.1:53994.service - OpenSSH per-connection server daemon (10.0.0.1:53994). May 15 09:40:08.930361 systemd-logind[1424]: Removed session 25. May 15 09:40:08.965485 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 53994 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:40:08.966737 sshd-session[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:40:08.971251 systemd-logind[1424]: New session 26 of user core. May 15 09:40:08.980914 systemd[1]: Started session-26.scope - Session 26 of User core. 
May 15 09:40:09.109491 kubelet[2522]: E0515 09:40:09.109441 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:09.110094 containerd[1447]: time="2025-05-15T09:40:09.110003827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5nf5,Uid:5ec2b1c3-12db-45db-845b-31e74d63e1e1,Namespace:kube-system,Attempt:0,}" May 15 09:40:09.142424 containerd[1447]: time="2025-05-15T09:40:09.142312545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:40:09.142424 containerd[1447]: time="2025-05-15T09:40:09.142388543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:40:09.142424 containerd[1447]: time="2025-05-15T09:40:09.142400343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:40:09.142574 containerd[1447]: time="2025-05-15T09:40:09.142488940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:40:09.164268 systemd[1]: Started cri-containerd-d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1.scope - libcontainer container d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1. 
May 15 09:40:09.189560 containerd[1447]: time="2025-05-15T09:40:09.189512922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w5nf5,Uid:5ec2b1c3-12db-45db-845b-31e74d63e1e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\"" May 15 09:40:09.190616 kubelet[2522]: E0515 09:40:09.190584 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:09.194299 containerd[1447]: time="2025-05-15T09:40:09.194251375Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 09:40:09.207194 containerd[1447]: time="2025-05-15T09:40:09.207063417Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f\"" May 15 09:40:09.207596 containerd[1447]: time="2025-05-15T09:40:09.207557242Z" level=info msg="StartContainer for \"704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f\"" May 15 09:40:09.238292 systemd[1]: Started cri-containerd-704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f.scope - libcontainer container 704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f. May 15 09:40:09.272187 containerd[1447]: time="2025-05-15T09:40:09.272136879Z" level=info msg="StartContainer for \"704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f\" returns successfully" May 15 09:40:09.289150 systemd[1]: cri-containerd-704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f.scope: Deactivated successfully. 
May 15 09:40:09.316740 containerd[1447]: time="2025-05-15T09:40:09.316673098Z" level=info msg="shim disconnected" id=704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f namespace=k8s.io May 15 09:40:09.316740 containerd[1447]: time="2025-05-15T09:40:09.316732216Z" level=warning msg="cleaning up after shim disconnected" id=704020b24bbca40f0a4185ac47a9de071aab1d5dd17f4cd1b2c71ffa8460b96f namespace=k8s.io May 15 09:40:09.316740 containerd[1447]: time="2025-05-15T09:40:09.316740696Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:09.643020 kubelet[2522]: E0515 09:40:09.642981 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:09.645839 containerd[1447]: time="2025-05-15T09:40:09.645604737Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 09:40:09.656957 containerd[1447]: time="2025-05-15T09:40:09.656864108Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733\"" May 15 09:40:09.659766 containerd[1447]: time="2025-05-15T09:40:09.657458490Z" level=info msg="StartContainer for \"5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733\"" May 15 09:40:09.687330 systemd[1]: Started cri-containerd-5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733.scope - libcontainer container 5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733. 
May 15 09:40:09.709260 containerd[1447]: time="2025-05-15T09:40:09.709214645Z" level=info msg="StartContainer for \"5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733\" returns successfully" May 15 09:40:09.718295 systemd[1]: cri-containerd-5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733.scope: Deactivated successfully. May 15 09:40:09.752687 containerd[1447]: time="2025-05-15T09:40:09.752602419Z" level=info msg="shim disconnected" id=5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733 namespace=k8s.io May 15 09:40:09.752687 containerd[1447]: time="2025-05-15T09:40:09.752679697Z" level=warning msg="cleaning up after shim disconnected" id=5c3057d94c9d65c0613a842270259216555c16a470733b4adbdaac160242f733 namespace=k8s.io May 15 09:40:09.752687 containerd[1447]: time="2025-05-15T09:40:09.752688657Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:10.646287 kubelet[2522]: E0515 09:40:10.646240 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:10.651589 containerd[1447]: time="2025-05-15T09:40:10.650388525Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 09:40:10.667308 containerd[1447]: time="2025-05-15T09:40:10.667252317Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99\"" May 15 09:40:10.667734 containerd[1447]: time="2025-05-15T09:40:10.667699304Z" level=info msg="StartContainer for \"3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99\"" May 15 09:40:10.701264 systemd[1]: Started 
cri-containerd-3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99.scope - libcontainer container 3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99. May 15 09:40:10.725966 containerd[1447]: time="2025-05-15T09:40:10.725919900Z" level=info msg="StartContainer for \"3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99\" returns successfully" May 15 09:40:10.727847 systemd[1]: cri-containerd-3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99.scope: Deactivated successfully. May 15 09:40:10.750947 containerd[1447]: time="2025-05-15T09:40:10.750873578Z" level=info msg="shim disconnected" id=3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99 namespace=k8s.io May 15 09:40:10.750947 containerd[1447]: time="2025-05-15T09:40:10.750938936Z" level=warning msg="cleaning up after shim disconnected" id=3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99 namespace=k8s.io May 15 09:40:10.750947 containerd[1447]: time="2025-05-15T09:40:10.750947216Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:10.983181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf49e8a7208fa24b3449a8140c84b1741c3ec54ec070255eb04f67f0f02df99-rootfs.mount: Deactivated successfully. 
May 15 09:40:11.431105 kubelet[2522]: E0515 09:40:11.430992 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:11.431105 kubelet[2522]: E0515 09:40:11.431010 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:11.431306 kubelet[2522]: E0515 09:40:11.431227 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:11.650297 kubelet[2522]: E0515 09:40:11.650249 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:11.653719 containerd[1447]: time="2025-05-15T09:40:11.653589610Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 09:40:11.673703 containerd[1447]: time="2025-05-15T09:40:11.673640270Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac\"" May 15 09:40:11.674442 containerd[1447]: time="2025-05-15T09:40:11.674406089Z" level=info msg="StartContainer for \"f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac\"" May 15 09:40:11.713273 systemd[1]: Started cri-containerd-f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac.scope - libcontainer container f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac. 
May 15 09:40:11.733663 systemd[1]: cri-containerd-f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac.scope: Deactivated successfully. May 15 09:40:11.735842 containerd[1447]: time="2025-05-15T09:40:11.735798117Z" level=info msg="StartContainer for \"f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac\" returns successfully" May 15 09:40:11.757055 containerd[1447]: time="2025-05-15T09:40:11.756979587Z" level=info msg="shim disconnected" id=f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac namespace=k8s.io May 15 09:40:11.757055 containerd[1447]: time="2025-05-15T09:40:11.757042665Z" level=warning msg="cleaning up after shim disconnected" id=f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac namespace=k8s.io May 15 09:40:11.757261 containerd[1447]: time="2025-05-15T09:40:11.757064345Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:40:11.983236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f5f532999f97b04dc6f156825ac827eb657c82e466c52321085b75f2e7593fac-rootfs.mount: Deactivated successfully. 
May 15 09:40:12.469418 kubelet[2522]: E0515 09:40:12.469363 2522 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 09:40:12.654211 kubelet[2522]: E0515 09:40:12.654174 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:12.656893 containerd[1447]: time="2025-05-15T09:40:12.656674170Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 09:40:12.671807 containerd[1447]: time="2025-05-15T09:40:12.671676276Z" level=info msg="CreateContainer within sandbox \"d5283c20ef5d9875afa9676713c10ea792a67818a97d2be864ddf01205eb05a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7406662fa4264b4c636efa9576af2ca39dc9f2527f8aea6906fa10c68a8951e8\"" May 15 09:40:12.672974 containerd[1447]: time="2025-05-15T09:40:12.672909205Z" level=info msg="StartContainer for \"7406662fa4264b4c636efa9576af2ca39dc9f2527f8aea6906fa10c68a8951e8\"" May 15 09:40:12.702244 systemd[1]: Started cri-containerd-7406662fa4264b4c636efa9576af2ca39dc9f2527f8aea6906fa10c68a8951e8.scope - libcontainer container 7406662fa4264b4c636efa9576af2ca39dc9f2527f8aea6906fa10c68a8951e8. 
May 15 09:40:12.731140 containerd[1447]: time="2025-05-15T09:40:12.731001555Z" level=info msg="StartContainer for \"7406662fa4264b4c636efa9576af2ca39dc9f2527f8aea6906fa10c68a8951e8\" returns successfully" May 15 09:40:13.011093 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 15 09:40:13.659268 kubelet[2522]: E0515 09:40:13.659224 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:14.347668 kubelet[2522]: I0515 09:40:14.347611 2522 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T09:40:14Z","lastTransitionTime":"2025-05-15T09:40:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 09:40:15.110226 kubelet[2522]: E0515 09:40:15.110181 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:15.860813 systemd-networkd[1379]: lxc_health: Link UP May 15 09:40:15.870224 systemd-networkd[1379]: lxc_health: Gained carrier May 15 09:40:17.112121 kubelet[2522]: E0515 09:40:17.111859 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:17.134442 kubelet[2522]: I0515 09:40:17.133650 2522 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w5nf5" podStartSLOduration=9.133630419 podStartE2EDuration="9.133630419s" podCreationTimestamp="2025-05-15 09:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-15 09:40:13.675298861 +0000 UTC m=+81.356584093" watchObservedRunningTime="2025-05-15 09:40:17.133630419 +0000 UTC m=+84.814915651" May 15 09:40:17.665789 kubelet[2522]: E0515 09:40:17.665730 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:17.891194 systemd-networkd[1379]: lxc_health: Gained IPv6LL May 15 09:40:18.667980 kubelet[2522]: E0515 09:40:18.667940 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:19.431315 kubelet[2522]: E0515 09:40:19.431283 2522 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:40:21.892965 sshd[4356]: Connection closed by 10.0.0.1 port 53994 May 15 09:40:21.893488 sshd-session[4351]: pam_unix(sshd:session): session closed for user core May 15 09:40:21.897774 systemd[1]: sshd@25-10.0.0.103:22-10.0.0.1:53994.service: Deactivated successfully. May 15 09:40:21.899572 systemd[1]: session-26.scope: Deactivated successfully. May 15 09:40:21.900217 systemd-logind[1424]: Session 26 logged out. Waiting for processes to exit. May 15 09:40:21.901441 systemd-logind[1424]: Removed session 26.