May 15 09:18:00.932413 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 09:18:00.932433 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 08:06:05 -00 2025
May 15 09:18:00.932443 kernel: KASLR enabled
May 15 09:18:00.932449 kernel: efi: EFI v2.7 by EDK II
May 15 09:18:00.932455 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 15 09:18:00.932460 kernel: random: crng init done
May 15 09:18:00.932467 kernel: secureboot: Secure boot disabled
May 15 09:18:00.932473 kernel: ACPI: Early table checksum verification disabled
May 15 09:18:00.932479 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 09:18:00.932487 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 09:18:00.932493 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932499 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932505 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932511 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932519 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932527 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932533 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932540 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932546 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 09:18:00.932552 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 09:18:00.932559 kernel: NUMA: Failed to initialise from firmware
May 15 09:18:00.932565 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:18:00.932572 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 09:18:00.932578 kernel: Zone ranges:
May 15 09:18:00.932586 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:18:00.932599 kernel: DMA32 empty
May 15 09:18:00.932606 kernel: Normal empty
May 15 09:18:00.932612 kernel: Movable zone start for each node
May 15 09:18:00.932619 kernel: Early memory node ranges
May 15 09:18:00.932625 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 09:18:00.932631 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 09:18:00.932638 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 09:18:00.932644 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 09:18:00.932650 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 09:18:00.932657 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 09:18:00.932663 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 09:18:00.932669 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 09:18:00.932677 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 09:18:00.932684 kernel: psci: probing for conduit method from ACPI.
May 15 09:18:00.932690 kernel: psci: PSCIv1.1 detected in firmware.
May 15 09:18:00.932699 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 09:18:00.932706 kernel: psci: Trusted OS migration not required
May 15 09:18:00.932713 kernel: psci: SMC Calling Convention v1.1
May 15 09:18:00.932721 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 09:18:00.932728 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 09:18:00.932735 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 09:18:00.932742 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 09:18:00.932749 kernel: Detected PIPT I-cache on CPU0
May 15 09:18:00.932756 kernel: CPU features: detected: GIC system register CPU interface
May 15 09:18:00.932762 kernel: CPU features: detected: Hardware dirty bit management
May 15 09:18:00.932769 kernel: CPU features: detected: Spectre-v4
May 15 09:18:00.932776 kernel: CPU features: detected: Spectre-BHB
May 15 09:18:00.932782 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 09:18:00.932790 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 09:18:00.932797 kernel: CPU features: detected: ARM erratum 1418040
May 15 09:18:00.932804 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 09:18:00.932810 kernel: alternatives: applying boot alternatives
May 15 09:18:00.932818 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:18:00.932826 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 09:18:00.932832 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 09:18:00.932840 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 09:18:00.932846 kernel: Fallback order for Node 0: 0
May 15 09:18:00.932853 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 09:18:00.932860 kernel: Policy zone: DMA
May 15 09:18:00.932868 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 09:18:00.932875 kernel: software IO TLB: area num 4.
May 15 09:18:00.932881 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 09:18:00.932888 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 15 09:18:00.932895 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 09:18:00.932902 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 09:18:00.932909 kernel: rcu: RCU event tracing is enabled.
May 15 09:18:00.932916 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 09:18:00.932923 kernel: Trampoline variant of Tasks RCU enabled.
May 15 09:18:00.932930 kernel: Tracing variant of Tasks RCU enabled.
May 15 09:18:00.932936 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 09:18:00.932943 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 09:18:00.932951 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 09:18:00.932958 kernel: GICv3: 256 SPIs implemented
May 15 09:18:00.932964 kernel: GICv3: 0 Extended SPIs implemented
May 15 09:18:00.932971 kernel: Root IRQ handler: gic_handle_irq
May 15 09:18:00.932977 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 09:18:00.932984 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 09:18:00.932991 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 09:18:00.932998 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 09:18:00.933005 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 09:18:00.933012 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 09:18:00.933018 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 09:18:00.933026 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 09:18:00.933033 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:18:00.933040 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 09:18:00.933047 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 09:18:00.933053 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 09:18:00.933060 kernel: arm-pv: using stolen time PV
May 15 09:18:00.933067 kernel: Console: colour dummy device 80x25
May 15 09:18:00.933074 kernel: ACPI: Core revision 20230628
May 15 09:18:00.933081 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 09:18:00.933088 kernel: pid_max: default: 32768 minimum: 301
May 15 09:18:00.933096 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 09:18:00.933103 kernel: landlock: Up and running.
May 15 09:18:00.933110 kernel: SELinux: Initializing.
May 15 09:18:00.933118 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:18:00.933125 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 09:18:00.933136 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 09:18:00.933151 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:18:00.933158 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 09:18:00.933165 kernel: rcu: Hierarchical SRCU implementation.
May 15 09:18:00.933174 kernel: rcu: Max phase no-delay instances is 400.
May 15 09:18:00.933181 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 09:18:00.933188 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 09:18:00.933195 kernel: Remapping and enabling EFI services.
May 15 09:18:00.933201 kernel: smp: Bringing up secondary CPUs ...
May 15 09:18:00.933208 kernel: Detected PIPT I-cache on CPU1
May 15 09:18:00.933215 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 09:18:00.933222 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 09:18:00.933229 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:18:00.933236 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 09:18:00.933245 kernel: Detected PIPT I-cache on CPU2
May 15 09:18:00.933252 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 09:18:00.933264 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 09:18:00.933272 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:18:00.933279 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 09:18:00.933286 kernel: Detected PIPT I-cache on CPU3
May 15 09:18:00.933294 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 09:18:00.933301 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 09:18:00.933308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 09:18:00.933316 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 09:18:00.933325 kernel: smp: Brought up 1 node, 4 CPUs
May 15 09:18:00.933332 kernel: SMP: Total of 4 processors activated.
May 15 09:18:00.933339 kernel: CPU features: detected: 32-bit EL0 Support
May 15 09:18:00.933347 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 09:18:00.933354 kernel: CPU features: detected: Common not Private translations
May 15 09:18:00.933361 kernel: CPU features: detected: CRC32 instructions
May 15 09:18:00.933369 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 09:18:00.933377 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 09:18:00.933384 kernel: CPU features: detected: LSE atomic instructions
May 15 09:18:00.933392 kernel: CPU features: detected: Privileged Access Never
May 15 09:18:00.933399 kernel: CPU features: detected: RAS Extension Support
May 15 09:18:00.933406 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 09:18:00.933413 kernel: CPU: All CPU(s) started at EL1
May 15 09:18:00.933420 kernel: alternatives: applying system-wide alternatives
May 15 09:18:00.933428 kernel: devtmpfs: initialized
May 15 09:18:00.933438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 09:18:00.933448 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 09:18:00.933455 kernel: pinctrl core: initialized pinctrl subsystem
May 15 09:18:00.933462 kernel: SMBIOS 3.0.0 present.
May 15 09:18:00.933469 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 09:18:00.933477 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 09:18:00.933484 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 09:18:00.933491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 09:18:00.933499 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 09:18:00.933506 kernel: audit: initializing netlink subsys (disabled)
May 15 09:18:00.933515 kernel: audit: type=2000 audit(0.026:1): state=initialized audit_enabled=0 res=1
May 15 09:18:00.933522 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 09:18:00.933529 kernel: cpuidle: using governor menu
May 15 09:18:00.933537 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 09:18:00.933544 kernel: ASID allocator initialised with 32768 entries
May 15 09:18:00.933551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 09:18:00.933558 kernel: Serial: AMBA PL011 UART driver
May 15 09:18:00.933566 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 09:18:00.933573 kernel: Modules: 0 pages in range for non-PLT usage
May 15 09:18:00.933582 kernel: Modules: 508944 pages in range for PLT usage
May 15 09:18:00.933589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 09:18:00.933596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 09:18:00.933604 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 09:18:00.933611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 09:18:00.933618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 09:18:00.933625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 09:18:00.933632 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 09:18:00.933639 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 09:18:00.933648 kernel: ACPI: Added _OSI(Module Device)
May 15 09:18:00.933655 kernel: ACPI: Added _OSI(Processor Device)
May 15 09:18:00.933662 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 09:18:00.933669 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 09:18:00.933677 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 09:18:00.933684 kernel: ACPI: Interpreter enabled
May 15 09:18:00.933691 kernel: ACPI: Using GIC for interrupt routing
May 15 09:18:00.933698 kernel: ACPI: MCFG table detected, 1 entries
May 15 09:18:00.933705 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 09:18:00.933714 kernel: printk: console [ttyAMA0] enabled
May 15 09:18:00.933721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 09:18:00.933848 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 09:18:00.933925 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 09:18:00.933999 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 09:18:00.934064 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 09:18:00.934157 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 09:18:00.934172 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 09:18:00.934180 kernel: PCI host bridge to bus 0000:00
May 15 09:18:00.934258 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 09:18:00.934320 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 09:18:00.934385 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 09:18:00.934446 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 09:18:00.934526 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 09:18:00.934612 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 09:18:00.934680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 09:18:00.934746 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 09:18:00.934811 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:18:00.934877 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 09:18:00.934943 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 09:18:00.935010 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 09:18:00.935071 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 09:18:00.935130 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 09:18:00.935208 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 09:18:00.935218 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 09:18:00.935226 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 09:18:00.935233 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 09:18:00.935241 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 09:18:00.935248 kernel: iommu: Default domain type: Translated
May 15 09:18:00.935258 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 09:18:00.935266 kernel: efivars: Registered efivars operations
May 15 09:18:00.935273 kernel: vgaarb: loaded
May 15 09:18:00.935280 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 09:18:00.935288 kernel: VFS: Disk quotas dquot_6.6.0
May 15 09:18:00.935295 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 09:18:00.935303 kernel: pnp: PnP ACPI init
May 15 09:18:00.935375 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 09:18:00.935387 kernel: pnp: PnP ACPI: found 1 devices
May 15 09:18:00.935395 kernel: NET: Registered PF_INET protocol family
May 15 09:18:00.935403 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 09:18:00.935410 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 09:18:00.935417 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 09:18:00.935425 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 09:18:00.935432 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 09:18:00.935440 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 09:18:00.935447 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:18:00.935456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 09:18:00.935463 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 09:18:00.935471 kernel: PCI: CLS 0 bytes, default 64
May 15 09:18:00.935478 kernel: kvm [1]: HYP mode not available
May 15 09:18:00.935485 kernel: Initialise system trusted keyrings
May 15 09:18:00.935492 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 09:18:00.935500 kernel: Key type asymmetric registered
May 15 09:18:00.935507 kernel: Asymmetric key parser 'x509' registered
May 15 09:18:00.935514 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 09:18:00.935523 kernel: io scheduler mq-deadline registered
May 15 09:18:00.935530 kernel: io scheduler kyber registered
May 15 09:18:00.935537 kernel: io scheduler bfq registered
May 15 09:18:00.935545 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 09:18:00.935552 kernel: ACPI: button: Power Button [PWRB]
May 15 09:18:00.935560 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 09:18:00.935627 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 09:18:00.935637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 09:18:00.935644 kernel: thunder_xcv, ver 1.0
May 15 09:18:00.935653 kernel: thunder_bgx, ver 1.0
May 15 09:18:00.935661 kernel: nicpf, ver 1.0
May 15 09:18:00.935668 kernel: nicvf, ver 1.0
May 15 09:18:00.935741 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 09:18:00.935804 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T09:18:00 UTC (1747300680)
May 15 09:18:00.935814 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 09:18:00.935821 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 09:18:00.935829 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 09:18:00.935838 kernel: watchdog: Hard watchdog permanently disabled
May 15 09:18:00.935845 kernel: NET: Registered PF_INET6 protocol family
May 15 09:18:00.935852 kernel: Segment Routing with IPv6
May 15 09:18:00.935860 kernel: In-situ OAM (IOAM) with IPv6
May 15 09:18:00.935867 kernel: NET: Registered PF_PACKET protocol family
May 15 09:18:00.935874 kernel: Key type dns_resolver registered
May 15 09:18:00.935881 kernel: registered taskstats version 1
May 15 09:18:00.935888 kernel: Loading compiled-in X.509 certificates
May 15 09:18:00.935896 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 92c83259b69f308571254e31c325f6266f61f369'
May 15 09:18:00.935904 kernel: Key type .fscrypt registered
May 15 09:18:00.935911 kernel: Key type fscrypt-provisioning registered
May 15 09:18:00.935919 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 09:18:00.935926 kernel: ima: Allocated hash algorithm: sha1
May 15 09:18:00.935933 kernel: ima: No architecture policies found
May 15 09:18:00.935941 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 09:18:00.935948 kernel: clk: Disabling unused clocks
May 15 09:18:00.935955 kernel: Freeing unused kernel memory: 39744K
May 15 09:18:00.935962 kernel: Run /init as init process
May 15 09:18:00.935971 kernel: with arguments:
May 15 09:18:00.935978 kernel: /init
May 15 09:18:00.935985 kernel: with environment:
May 15 09:18:00.935991 kernel: HOME=/
May 15 09:18:00.935999 kernel: TERM=linux
May 15 09:18:00.936006 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 09:18:00.936015 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 09:18:00.936024 systemd[1]: Detected virtualization kvm.
May 15 09:18:00.936033 systemd[1]: Detected architecture arm64.
May 15 09:18:00.936041 systemd[1]: Running in initrd.
May 15 09:18:00.936048 systemd[1]: No hostname configured, using default hostname.
May 15 09:18:00.936056 systemd[1]: Hostname set to .
May 15 09:18:00.936064 systemd[1]: Initializing machine ID from VM UUID.
May 15 09:18:00.936071 systemd[1]: Queued start job for default target initrd.target.
May 15 09:18:00.936079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:18:00.936087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:18:00.936097 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 09:18:00.936105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 09:18:00.936112 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 09:18:00.936121 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 09:18:00.936130 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 09:18:00.936152 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 09:18:00.936162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:18:00.936170 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 09:18:00.936179 systemd[1]: Reached target paths.target - Path Units.
May 15 09:18:00.936186 systemd[1]: Reached target slices.target - Slice Units.
May 15 09:18:00.936194 systemd[1]: Reached target swap.target - Swaps.
May 15 09:18:00.936204 systemd[1]: Reached target timers.target - Timer Units.
May 15 09:18:00.936215 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 09:18:00.936224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 09:18:00.936232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 09:18:00.936241 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 09:18:00.936249 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:18:00.936257 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 09:18:00.936265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:18:00.936273 systemd[1]: Reached target sockets.target - Socket Units.
May 15 09:18:00.936280 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 09:18:00.936288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 09:18:00.936296 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 09:18:00.936304 systemd[1]: Starting systemd-fsck-usr.service...
May 15 09:18:00.936313 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 09:18:00.936321 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 09:18:00.936328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:18:00.936336 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 09:18:00.936344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:18:00.936352 systemd[1]: Finished systemd-fsck-usr.service.
May 15 09:18:00.936377 systemd-journald[237]: Collecting audit messages is disabled.
May 15 09:18:00.936396 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 09:18:00.936406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:18:00.936414 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 09:18:00.936422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:18:00.936430 systemd-journald[237]: Journal started
May 15 09:18:00.936449 systemd-journald[237]: Runtime Journal (/run/log/journal/938457e2a8fd4c32a337599804c1f407) is 5.9M, max 47.3M, 41.4M free.
May 15 09:18:00.924266 systemd-modules-load[238]: Inserted module 'overlay'
May 15 09:18:00.938813 systemd-modules-load[238]: Inserted module 'br_netfilter'
May 15 09:18:00.939794 kernel: Bridge firewalling registered
May 15 09:18:00.942821 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 09:18:00.943239 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 09:18:00.944538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 09:18:00.960340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 09:18:00.962011 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 09:18:00.964271 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 09:18:00.966408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:18:00.970981 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 09:18:00.974180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 09:18:00.976061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:18:00.985268 dracut-cmdline[270]: dracut-dracut-053
May 15 09:18:00.984332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 09:18:00.989389 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d0dcc1a3c20c0187ebc71aef3b6915c891fced8fde4a46120a0dd669765b171b
May 15 09:18:00.985702 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:18:01.015276 systemd-resolved[278]: Positive Trust Anchors:
May 15 09:18:01.015349 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 09:18:01.015380 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 09:18:01.020066 systemd-resolved[278]: Defaulting to hostname 'linux'.
May 15 09:18:01.023850 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 09:18:01.025017 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 09:18:01.055172 kernel: SCSI subsystem initialized
May 15 09:18:01.060160 kernel: Loading iSCSI transport class v2.0-870.
May 15 09:18:01.067176 kernel: iscsi: registered transport (tcp)
May 15 09:18:01.080295 kernel: iscsi: registered transport (qla4xxx)
May 15 09:18:01.080338 kernel: QLogic iSCSI HBA Driver
May 15 09:18:01.123058 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 09:18:01.139292 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 09:18:01.156540 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 09:18:01.156598 kernel: device-mapper: uevent: version 1.0.3
May 15 09:18:01.157697 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 09:18:01.203183 kernel: raid6: neonx8 gen() 15695 MB/s
May 15 09:18:01.220174 kernel: raid6: neonx4 gen() 15546 MB/s
May 15 09:18:01.237166 kernel: raid6: neonx2 gen() 13166 MB/s
May 15 09:18:01.254167 kernel: raid6: neonx1 gen() 10419 MB/s
May 15 09:18:01.271166 kernel: raid6: int64x8 gen() 6918 MB/s
May 15 09:18:01.288163 kernel: raid6: int64x4 gen() 7294 MB/s
May 15 09:18:01.305167 kernel: raid6: int64x2 gen() 6073 MB/s
May 15 09:18:01.322316 kernel: raid6: int64x1 gen() 4908 MB/s
May 15 09:18:01.322334 kernel: raid6: using algorithm neonx8 gen() 15695 MB/s
May 15 09:18:01.340809 kernel: raid6: .... xor() 11496 MB/s, rmw enabled
May 15 09:18:01.340823 kernel: raid6: using neon recovery algorithm
May 15 09:18:01.346171 kernel: xor: measuring software checksum speed
May 15 09:18:01.347509 kernel: 8regs : 15767 MB/sec
May 15 09:18:01.347522 kernel: 32regs : 17425 MB/sec
May 15 09:18:01.348170 kernel: arm64_neon : 26919 MB/sec
May 15 09:18:01.348186 kernel: xor: using function: arm64_neon (26919 MB/sec)
May 15 09:18:01.400170 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 09:18:01.410889 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 09:18:01.425343 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:18:01.436900 systemd-udevd[461]: Using default interface naming scheme 'v255'.
May 15 09:18:01.440000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:18:01.443664 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 09:18:01.462076 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
May 15 09:18:01.492190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 09:18:01.506283 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 09:18:01.546099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:18:01.558283 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 15 09:18:01.570612 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 15 09:18:01.572202 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 09:18:01.574320 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:18:01.576361 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 09:18:01.585296 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 15 09:18:01.592176 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 15 09:18:01.592381 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 15 09:18:01.593964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 15 09:18:01.603805 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 15 09:18:01.603842 kernel: GPT:9289727 != 19775487
May 15 09:18:01.603852 kernel: GPT:Alternate GPT header not at the end of the disk.
May 15 09:18:01.603869 kernel: GPT:9289727 != 19775487
May 15 09:18:01.604260 kernel: GPT: Use GNU Parted to correct GPT errors.
May 15 09:18:01.605466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:18:01.612687 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 09:18:01.612813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:18:01.617175 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:18:01.618449 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 09:18:01.618582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:18:01.621242 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:18:01.629453 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:18:01.638221 kernel: BTRFS: device fsid 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (516)
May 15 09:18:01.643206 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (515)
May 15 09:18:01.640789 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 15 09:18:01.645004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:18:01.649949 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 15 09:18:01.656530 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 15 09:18:01.657750 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 15 09:18:01.663289 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 09:18:01.683295 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 15 09:18:01.685193 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 09:18:01.692268 disk-uuid[559]: Primary Header is updated.
May 15 09:18:01.692268 disk-uuid[559]: Secondary Entries is updated.
May 15 09:18:01.692268 disk-uuid[559]: Secondary Header is updated.
May 15 09:18:01.698170 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:18:01.709618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:18:02.716185 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 15 09:18:02.717078 disk-uuid[560]: The operation has completed successfully.
May 15 09:18:02.744636 systemd[1]: disk-uuid.service: Deactivated successfully.
May 15 09:18:02.744737 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 15 09:18:02.763308 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 15 09:18:02.769148 sh[581]: Success
May 15 09:18:02.784186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 15 09:18:02.817233 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 15 09:18:02.828104 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 15 09:18:02.830660 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 15 09:18:02.846074 kernel: BTRFS info (device dm-0): first mount of filesystem 7f05ae4e-a0c8-4dcf-a71f-4c5b9e94e6f4
May 15 09:18:02.846117 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 15 09:18:02.846132 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 15 09:18:02.848200 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 15 09:18:02.848235 kernel: BTRFS info (device dm-0): using free space tree
May 15 09:18:02.854895 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 15 09:18:02.856023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 15 09:18:02.873325 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 15 09:18:02.875120 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 15 09:18:02.884809 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:18:02.884851 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:18:02.884862 kernel: BTRFS info (device vda6): using free space tree
May 15 09:18:02.889917 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:18:02.896998 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 15 09:18:02.899154 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:18:02.903956 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 15 09:18:02.913351 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 15 09:18:02.991434 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 09:18:03.008346 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 09:18:03.056437 systemd-networkd[769]: lo: Link UP
May 15 09:18:03.056452 systemd-networkd[769]: lo: Gained carrier
May 15 09:18:03.057257 systemd-networkd[769]: Enumeration completed
May 15 09:18:03.057895 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:18:03.057898 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 09:18:03.059532 systemd-networkd[769]: eth0: Link UP
May 15 09:18:03.059537 systemd-networkd[769]: eth0: Gained carrier
May 15 09:18:03.059545 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:18:03.059557 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 09:18:03.060856 systemd[1]: Reached target network.target - Network.
May 15 09:18:03.084426 ignition[677]: Ignition 2.20.0
May 15 09:18:03.084436 ignition[677]: Stage: fetch-offline
May 15 09:18:03.084484 ignition[677]: no configs at "/usr/lib/ignition/base.d"
May 15 09:18:03.084492 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:03.084686 ignition[677]: parsed url from cmdline: ""
May 15 09:18:03.084690 ignition[677]: no config URL provided
May 15 09:18:03.084694 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
May 15 09:18:03.084702 ignition[677]: no config at "/usr/lib/ignition/user.ign"
May 15 09:18:03.090223 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 09:18:03.084731 ignition[677]: op(1): [started] loading QEMU firmware config module
May 15 09:18:03.084736 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 15 09:18:03.095003 ignition[677]: op(1): [finished] loading QEMU firmware config module
May 15 09:18:03.133105 ignition[677]: parsing config with SHA512: d35c8da501270e8c706d5297538fa5dfe7f08a25df6797c0987c332f506b69c758456b5c638b95ea3373eddd866e49a3386630e677dec4416ef95cb9069dc4cc
May 15 09:18:03.138425 unknown[677]: fetched base config from "system"
May 15 09:18:03.138435 unknown[677]: fetched user config from "qemu"
May 15 09:18:03.138840 ignition[677]: fetch-offline: fetch-offline passed
May 15 09:18:03.138911 ignition[677]: Ignition finished successfully
May 15 09:18:03.142233 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 09:18:03.143826 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 15 09:18:03.154314 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 15 09:18:03.165073 ignition[780]: Ignition 2.20.0
May 15 09:18:03.165084 ignition[780]: Stage: kargs
May 15 09:18:03.165337 ignition[780]: no configs at "/usr/lib/ignition/base.d"
May 15 09:18:03.165357 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:03.167953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 15 09:18:03.166280 ignition[780]: kargs: kargs passed
May 15 09:18:03.166326 ignition[780]: Ignition finished successfully
May 15 09:18:03.182323 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 15 09:18:03.192792 ignition[788]: Ignition 2.20.0
May 15 09:18:03.192802 ignition[788]: Stage: disks
May 15 09:18:03.192962 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 15 09:18:03.192972 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:03.193957 ignition[788]: disks: disks passed
May 15 09:18:03.194004 ignition[788]: Ignition finished successfully
May 15 09:18:03.197186 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 15 09:18:03.198821 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 15 09:18:03.200525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 15 09:18:03.202598 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 09:18:03.204699 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 09:18:03.206524 systemd[1]: Reached target basic.target - Basic System.
May 15 09:18:03.228335 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 15 09:18:03.238413 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.7
May 15 09:18:03.238424 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux4'.
May 15 09:18:03.242132 systemd-fsck[799]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 15 09:18:03.244851 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 15 09:18:03.247717 systemd[1]: Mounting sysroot.mount - /sysroot...
May 15 09:18:03.293166 kernel: EXT4-fs (vda9): mounted filesystem e3ca107a-d829-49e7-81f2-462a85be67d1 r/w with ordered data mode. Quota mode: none.
May 15 09:18:03.293838 systemd[1]: Mounted sysroot.mount - /sysroot.
May 15 09:18:03.295164 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 15 09:18:03.315277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 09:18:03.317095 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 15 09:18:03.318498 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 15 09:18:03.318539 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 15 09:18:03.325841 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (807)
May 15 09:18:03.325865 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:18:03.318561 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 09:18:03.331492 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:18:03.331512 kernel: BTRFS info (device vda6): using free space tree
May 15 09:18:03.331522 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:18:03.323709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 15 09:18:03.342321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 15 09:18:03.344291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 09:18:03.379224 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory
May 15 09:18:03.383422 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory
May 15 09:18:03.386453 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
May 15 09:18:03.389265 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
May 15 09:18:03.461077 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 15 09:18:03.475268 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 15 09:18:03.476990 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 15 09:18:03.484159 kernel: BTRFS info (device vda6): last unmount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:18:03.498366 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 15 09:18:03.501947 ignition[922]: INFO : Ignition 2.20.0
May 15 09:18:03.501947 ignition[922]: INFO : Stage: mount
May 15 09:18:03.503602 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:18:03.503602 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:03.503602 ignition[922]: INFO : mount: mount passed
May 15 09:18:03.503602 ignition[922]: INFO : Ignition finished successfully
May 15 09:18:03.506207 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 15 09:18:03.519315 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 15 09:18:03.844671 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 15 09:18:03.859327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 15 09:18:03.865626 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (934)
May 15 09:18:03.865657 kernel: BTRFS info (device vda6): first mount of filesystem dd768540-f927-459a-82ec-deed8f3baa7c
May 15 09:18:03.866596 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 15 09:18:03.866608 kernel: BTRFS info (device vda6): using free space tree
May 15 09:18:03.869157 kernel: BTRFS info (device vda6): auto enabling async discard
May 15 09:18:03.870514 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 15 09:18:03.886659 ignition[951]: INFO : Ignition 2.20.0
May 15 09:18:03.886659 ignition[951]: INFO : Stage: files
May 15 09:18:03.888297 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:18:03.888297 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:03.888297 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
May 15 09:18:03.891906 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 15 09:18:03.891906 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 15 09:18:03.891906 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 15 09:18:03.891906 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 15 09:18:03.891906 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 15 09:18:03.891076 unknown[951]: wrote ssh authorized keys file for user: core
May 15 09:18:03.899394 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 09:18:03.899394 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 15 09:18:03.928690 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 15 09:18:04.038321 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 15 09:18:04.038321 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 09:18:04.042372 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 15 09:18:04.336906 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 15 09:18:04.660357 systemd-networkd[769]: eth0: Gained IPv6LL
May 15 09:18:05.125243 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 15 09:18:05.125243 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 09:18:05.129307 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 15 09:18:05.426069 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 15 09:18:06.047814 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 15 09:18:06.047814 ignition[951]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 15 09:18:06.051523 ignition[951]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 15 09:18:06.087602 ignition[951]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 15 09:18:06.091677 ignition[951]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 15 09:18:06.094211 ignition[951]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 15 09:18:06.094211 ignition[951]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 15 09:18:06.094211 ignition[951]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 15 09:18:06.094211 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 15 09:18:06.094211 ignition[951]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 15 09:18:06.094211 ignition[951]: INFO : files: files passed
May 15 09:18:06.094211 ignition[951]: INFO : Ignition finished successfully
May 15 09:18:06.096026 systemd[1]: Finished ignition-files.service - Ignition (files).
May 15 09:18:06.109339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 15 09:18:06.111985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 15 09:18:06.113528 systemd[1]: ignition-quench.service: Deactivated successfully.
May 15 09:18:06.113608 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 15 09:18:06.120850 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory
May 15 09:18:06.124211 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:18:06.124211 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:18:06.128729 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 15 09:18:06.130503 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 09:18:06.132121 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 15 09:18:06.146375 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 15 09:18:06.167517 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 15 09:18:06.167644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 15 09:18:06.169851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 15 09:18:06.172236 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 15 09:18:06.176191 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 15 09:18:06.186433 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 15 09:18:06.201377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 09:18:06.211398 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 15 09:18:06.219242 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 15 09:18:06.220871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:18:06.223576 systemd[1]: Stopped target timers.target - Timer Units.
May 15 09:18:06.225531 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 15 09:18:06.225756 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 15 09:18:06.230391 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 15 09:18:06.231916 systemd[1]: Stopped target basic.target - Basic System.
May 15 09:18:06.234577 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 15 09:18:06.236210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 15 09:18:06.238664 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 15 09:18:06.240745 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 15 09:18:06.244620 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 15 09:18:06.246584 systemd[1]: Stopped target sysinit.target - System Initialization.
May 15 09:18:06.248567 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 15 09:18:06.250280 systemd[1]: Stopped target swap.target - Swaps.
May 15 09:18:06.251853 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 15 09:18:06.251988 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 15 09:18:06.254311 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 15 09:18:06.256297 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:18:06.258219 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 15 09:18:06.259199 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:18:06.260532 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 15 09:18:06.260662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 15 09:18:06.263552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 15 09:18:06.263676 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 15 09:18:06.265709 systemd[1]: Stopped target paths.target - Path Units.
May 15 09:18:06.267388 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 15 09:18:06.269194 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:18:06.270625 systemd[1]: Stopped target slices.target - Slice Units.
May 15 09:18:06.272215 systemd[1]: Stopped target sockets.target - Socket Units.
May 15 09:18:06.274081 systemd[1]: iscsid.socket: Deactivated successfully.
May 15 09:18:06.274198 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 15 09:18:06.276301 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 15 09:18:06.276382 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 09:18:06.278055 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 15 09:18:06.278193 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 09:18:06.279985 systemd[1]: ignition-files.service: Deactivated successfully.
May 15 09:18:06.280085 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 15 09:18:06.299453 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 15 09:18:06.300407 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 15 09:18:06.300561 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:18:06.306264 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 15 09:18:06.308211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 15 09:18:06.308347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:18:06.313333 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 15 09:18:06.313437 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 09:18:06.322565 ignition[1008]: INFO : Ignition 2.20.0
May 15 09:18:06.322565 ignition[1008]: INFO : Stage: umount
May 15 09:18:06.324370 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
May 15 09:18:06.324370 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 15 09:18:06.324370 ignition[1008]: INFO : umount: umount passed
May 15 09:18:06.324370 ignition[1008]: INFO : Ignition finished successfully
May 15 09:18:06.325263 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 15 09:18:06.326293 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 15 09:18:06.326383 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 15 09:18:06.330605 systemd[1]: ignition-mount.service: Deactivated successfully.
May 15 09:18:06.330716 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 15 09:18:06.333616 systemd[1]: Stopped target network.target - Network.
May 15 09:18:06.335593 systemd[1]: ignition-disks.service: Deactivated successfully.
May 15 09:18:06.335662 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 15 09:18:06.337622 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 15 09:18:06.337671 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 15 09:18:06.339710 systemd[1]: ignition-setup.service: Deactivated successfully.
May 15 09:18:06.339754 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 15 09:18:06.341525 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 15 09:18:06.341576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 15 09:18:06.344674 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 15 09:18:06.346331 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 15 09:18:06.349162 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 15 09:18:06.349275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 15 09:18:06.352470 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 15 09:18:06.352526 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:18:06.353227 systemd-networkd[769]: eth0: DHCPv6 lease lost
May 15 09:18:06.355251 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 15 09:18:06.355358 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 15 09:18:06.357492 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 15 09:18:06.357525 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:18:06.364447 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 15 09:18:06.365354 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 15 09:18:06.365418 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 15 09:18:06.367533 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 09:18:06.367578 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 09:18:06.369707 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 15 09:18:06.369753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 15 09:18:06.371471 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:18:06.381730 systemd[1]: network-cleanup.service: Deactivated successfully.
May 15 09:18:06.381830 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 15 09:18:06.386771 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 15 09:18:06.386907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:18:06.389110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 15 09:18:06.389180 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 15 09:18:06.391100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 15 09:18:06.391158 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:18:06.392916 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 15 09:18:06.392967 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 15 09:18:06.395624 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 15 09:18:06.395670 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 15 09:18:06.400213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 15 09:18:06.400280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 09:18:06.412287 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 15 09:18:06.413338 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 15 09:18:06.413394 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:18:06.415615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 15 09:18:06.415660 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:18:06.417869 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 15 09:18:06.417968 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 15 09:18:06.419949 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 15 09:18:06.420038 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 15 09:18:06.421781 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 15 09:18:06.421870 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 15 09:18:06.424087 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 15 09:18:06.427949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 15 09:18:06.437410 systemd[1]: Switching root.
May 15 09:18:06.466342 systemd-journald[237]: Journal stopped
May 15 09:18:07.179506 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
May 15 09:18:07.179565 kernel: SELinux: policy capability network_peer_controls=1
May 15 09:18:07.179578 kernel: SELinux: policy capability open_perms=1
May 15 09:18:07.179587 kernel: SELinux: policy capability extended_socket_class=1
May 15 09:18:07.179601 kernel: SELinux: policy capability always_check_network=0
May 15 09:18:07.179614 kernel: SELinux: policy capability cgroup_seclabel=1
May 15 09:18:07.179623 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 15 09:18:07.179635 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 15 09:18:07.179645 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 15 09:18:07.179654 kernel: audit: type=1403 audit(1747300686.623:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 15 09:18:07.179670 systemd[1]: Successfully loaded SELinux policy in 32.497ms.
May 15 09:18:07.179687 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.681ms.
May 15 09:18:07.179698 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 09:18:07.179709 systemd[1]: Detected virtualization kvm.
May 15 09:18:07.179720 systemd[1]: Detected architecture arm64.
May 15 09:18:07.179730 systemd[1]: Detected first boot.
May 15 09:18:07.179741 systemd[1]: Initializing machine ID from VM UUID.
May 15 09:18:07.179752 zram_generator::config[1053]: No configuration found.
May 15 09:18:07.179763 systemd[1]: Populated /etc with preset unit settings.
May 15 09:18:07.179773 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 15 09:18:07.179783 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 15 09:18:07.179793 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 15 09:18:07.179804 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 15 09:18:07.179814 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 15 09:18:07.179826 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 15 09:18:07.179837 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 15 09:18:07.179847 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 15 09:18:07.179857 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 15 09:18:07.179867 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 15 09:18:07.179878 systemd[1]: Created slice user.slice - User and Session Slice.
May 15 09:18:07.179888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 09:18:07.179900 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 09:18:07.179910 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 15 09:18:07.179922 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 15 09:18:07.179933 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 15 09:18:07.179943 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 09:18:07.179954 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 15 09:18:07.179965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 09:18:07.179975 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 15 09:18:07.179986 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 15 09:18:07.179996 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 15 09:18:07.180008 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 15 09:18:07.180018 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 15 09:18:07.180029 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 15 09:18:07.180039 systemd[1]: Reached target slices.target - Slice Units.
May 15 09:18:07.180049 systemd[1]: Reached target swap.target - Swaps.
May 15 09:18:07.180059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 15 09:18:07.180070 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 15 09:18:07.180080 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 09:18:07.180091 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 09:18:07.180102 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 09:18:07.180113 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 15 09:18:07.180131 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 15 09:18:07.180162 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 15 09:18:07.180174 systemd[1]: Mounting media.mount - External Media Directory...
May 15 09:18:07.180185 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 15 09:18:07.180195 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 15 09:18:07.180205 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 15 09:18:07.180216 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 15 09:18:07.180229 systemd[1]: Reached target machines.target - Containers.
May 15 09:18:07.180240 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 15 09:18:07.180250 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:18:07.180260 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 09:18:07.180271 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 15 09:18:07.180282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:18:07.180292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 09:18:07.180302 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:18:07.180315 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 15 09:18:07.180325 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:18:07.180335 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 15 09:18:07.180346 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 15 09:18:07.180356 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 15 09:18:07.180366 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 15 09:18:07.180376 kernel: fuse: init (API version 7.39)
May 15 09:18:07.180386 systemd[1]: Stopped systemd-fsck-usr.service.
May 15 09:18:07.180397 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 09:18:07.180407 kernel: ACPI: bus type drm_connector registered
May 15 09:18:07.180417 kernel: loop: module loaded
May 15 09:18:07.180427 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 09:18:07.180438 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 15 09:18:07.180448 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 15 09:18:07.180458 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 15 09:18:07.180470 systemd[1]: verity-setup.service: Deactivated successfully.
May 15 09:18:07.180497 systemd-journald[1120]: Collecting audit messages is disabled.
May 15 09:18:07.180521 systemd[1]: Stopped verity-setup.service.
May 15 09:18:07.180532 systemd-journald[1120]: Journal started
May 15 09:18:07.180553 systemd-journald[1120]: Runtime Journal (/run/log/journal/938457e2a8fd4c32a337599804c1f407) is 5.9M, max 47.3M, 41.4M free.
May 15 09:18:06.982291 systemd[1]: Queued start job for default target multi-user.target.
May 15 09:18:06.996179 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 15 09:18:06.996551 systemd[1]: systemd-journald.service: Deactivated successfully.
May 15 09:18:07.183166 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 09:18:07.183737 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 15 09:18:07.184932 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 15 09:18:07.186220 systemd[1]: Mounted media.mount - External Media Directory.
May 15 09:18:07.187410 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 15 09:18:07.188681 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 15 09:18:07.189889 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 15 09:18:07.191194 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 15 09:18:07.192679 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 09:18:07.195476 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 15 09:18:07.195644 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 15 09:18:07.197080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:18:07.197434 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:18:07.198791 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 09:18:07.198950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 09:18:07.200473 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:18:07.200620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:18:07.202116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 15 09:18:07.202310 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 15 09:18:07.203744 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:18:07.203882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:18:07.205426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 09:18:07.206806 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 15 09:18:07.210170 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 15 09:18:07.222845 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 15 09:18:07.237060 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 15 09:18:07.239444 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 15 09:18:07.240624 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 15 09:18:07.240668 systemd[1]: Reached target local-fs.target - Local File Systems.
May 15 09:18:07.242696 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 15 09:18:07.245038 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 15 09:18:07.247181 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 15 09:18:07.248260 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:18:07.249781 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 15 09:18:07.253474 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 15 09:18:07.254694 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 09:18:07.256308 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 15 09:18:07.257737 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 09:18:07.259353 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 09:18:07.263246 systemd-journald[1120]: Time spent on flushing to /var/log/journal/938457e2a8fd4c32a337599804c1f407 is 15.409ms for 860 entries.
May 15 09:18:07.263246 systemd-journald[1120]: System Journal (/var/log/journal/938457e2a8fd4c32a337599804c1f407) is 8.0M, max 195.6M, 187.6M free.
May 15 09:18:07.289678 systemd-journald[1120]: Received client request to flush runtime journal.
May 15 09:18:07.266340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 15 09:18:07.280355 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 15 09:18:07.285787 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 15 09:18:07.287470 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 15 09:18:07.288860 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 15 09:18:07.292196 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 15 09:18:07.293993 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 15 09:18:07.296437 kernel: loop0: detected capacity change from 0 to 113536
May 15 09:18:07.296431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 09:18:07.303222 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 15 09:18:07.305847 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 15 09:18:07.313329 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 15 09:18:07.314543 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 15 09:18:07.322154 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 15 09:18:07.323829 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 15 09:18:07.327395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 09:18:07.333632 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 15 09:18:07.339604 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 15 09:18:07.341159 kernel: loop1: detected capacity change from 0 to 194096
May 15 09:18:07.343491 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 15 09:18:07.367190 kernel: loop2: detected capacity change from 0 to 116808
May 15 09:18:07.364291 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
May 15 09:18:07.364314 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
May 15 09:18:07.369366 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 09:18:07.396194 kernel: loop3: detected capacity change from 0 to 113536
May 15 09:18:07.401191 kernel: loop4: detected capacity change from 0 to 194096
May 15 09:18:07.408158 kernel: loop5: detected capacity change from 0 to 116808
May 15 09:18:07.410973 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 15 09:18:07.411406 (sd-merge)[1188]: Merged extensions into '/usr'.
May 15 09:18:07.415610 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
May 15 09:18:07.415629 systemd[1]: Reloading...
May 15 09:18:07.467504 zram_generator::config[1211]: No configuration found.
May 15 09:18:07.536769 ldconfig[1159]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 15 09:18:07.561836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 09:18:07.597367 systemd[1]: Reloading finished in 181 ms.
May 15 09:18:07.629214 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 15 09:18:07.630648 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 15 09:18:07.639310 systemd[1]: Starting ensure-sysext.service...
May 15 09:18:07.641594 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 09:18:07.653703 systemd[1]: Reloading requested from client PID 1248 ('systemctl') (unit ensure-sysext.service)...
May 15 09:18:07.653717 systemd[1]: Reloading...
May 15 09:18:07.664432 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 15 09:18:07.664700 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 15 09:18:07.665345 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 15 09:18:07.665556 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 15 09:18:07.665601 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
May 15 09:18:07.667580 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 15 09:18:07.667592 systemd-tmpfiles[1249]: Skipping /boot
May 15 09:18:07.674617 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
May 15 09:18:07.674633 systemd-tmpfiles[1249]: Skipping /boot
May 15 09:18:07.703420 zram_generator::config[1275]: No configuration found.
May 15 09:18:07.783276 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 09:18:07.817814 systemd[1]: Reloading finished in 163 ms.
May 15 09:18:07.829947 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 15 09:18:07.841768 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 09:18:07.847770 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 09:18:07.850343 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 15 09:18:07.852792 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 15 09:18:07.858279 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 09:18:07.863812 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 09:18:07.867185 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 15 09:18:07.874042 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 15 09:18:07.877517 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:18:07.881515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:18:07.888104 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:18:07.891520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:18:07.892696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:18:07.893436 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
May 15 09:18:07.893445 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 15 09:18:07.896867 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:18:07.898189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:18:07.899828 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:18:07.899945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:18:07.901638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:18:07.901759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:18:07.909526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:18:07.919494 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:18:07.923519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:18:07.925856 augenrules[1347]: No rules
May 15 09:18:07.926026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:18:07.929339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:18:07.930545 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 15 09:18:07.932204 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 09:18:07.933829 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 15 09:18:07.937880 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 09:18:07.938068 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 09:18:07.939941 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 15 09:18:07.942253 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 15 09:18:07.944910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:18:07.947162 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:18:07.949706 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:18:07.949830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:18:07.953063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:18:07.953205 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:18:07.971170 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1361)
May 15 09:18:07.976487 systemd[1]: Finished ensure-sysext.service.
May 15 09:18:07.982108 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 15 09:18:08.003620 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 15 09:18:08.022412 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 09:18:08.023953 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 15 09:18:08.025452 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 15 09:18:08.029157 systemd-resolved[1316]: Positive Trust Anchors:
May 15 09:18:08.029472 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 09:18:08.029507 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 09:18:08.030465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 15 09:18:08.033031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 15 09:18:08.036014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 15 09:18:08.038584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 15 09:18:08.042323 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 15 09:18:08.044274 systemd-resolved[1316]: Defaulting to hostname 'linux'.
May 15 09:18:08.045901 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 15 09:18:08.046399 augenrules[1391]: /sbin/augenrules: No change
May 15 09:18:08.052904 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 15 09:18:08.053262 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 09:18:08.057523 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 15 09:18:08.057692 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 15 09:18:08.059398 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 15 09:18:08.059522 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 15 09:18:08.060966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 15 09:18:08.061086 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 15 09:18:08.062703 augenrules[1417]: No rules
May 15 09:18:08.062797 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 15 09:18:08.062923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 15 09:18:08.065889 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 09:18:08.066046 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 09:18:08.068598 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 15 09:18:08.094331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 09:18:08.096663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 15 09:18:08.097807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 15 09:18:08.097863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 15 09:18:08.101361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 09:18:08.107217 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 15 09:18:08.118469 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 15 09:18:08.123180 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 15 09:18:08.144608 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 15 09:18:08.145541 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 15 09:18:08.146998 systemd[1]: Reached target time-set.target - System Time Set.
May 15 09:18:08.164037 systemd-networkd[1404]: lo: Link UP
May 15 09:18:08.164047 systemd-networkd[1404]: lo: Gained carrier
May 15 09:18:08.165282 systemd-networkd[1404]: Enumeration completed
May 15 09:18:08.165543 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 15 09:18:08.166004 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:18:08.166008 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 15 09:18:08.167110 systemd-networkd[1404]: eth0: Link UP
May 15 09:18:08.167128 systemd-networkd[1404]: eth0: Gained carrier
May 15 09:18:08.167183 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 15 09:18:08.167362 systemd[1]: Reached target network.target - Network.
May 15 09:18:08.188326 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 15 09:18:08.192159 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 15 09:18:08.193733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 09:18:08.195002 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 15 09:18:08.195614 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 09:18:08.195858 systemd-timesyncd[1408]: Network configuration changed, trying to establish connection.
May 15 09:18:08.196831 systemd[1]: Reached target sysinit.target - System Initialization.
May 15 09:18:08.198023 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 09:18:08.198819 systemd-timesyncd[1408]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 09:18:08.198870 systemd-timesyncd[1408]: Initial clock synchronization to Thu 2025-05-15 09:18:08.289106 UTC. May 15 09:18:08.199368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 09:18:08.200817 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 09:18:08.201952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 09:18:08.203202 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 09:18:08.204407 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 09:18:08.204441 systemd[1]: Reached target paths.target - Path Units. May 15 09:18:08.205297 systemd[1]: Reached target timers.target - Timer Units. May 15 09:18:08.206974 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 09:18:08.209290 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 09:18:08.217994 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 09:18:08.220085 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 09:18:08.221738 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 09:18:08.222926 systemd[1]: Reached target sockets.target - Socket Units. May 15 09:18:08.223849 systemd[1]: Reached target basic.target - Basic System. May 15 09:18:08.224818 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
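Unit names like `dev-disk-by\x2dlabel-OEM.device` and `user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path` above come from systemd's path escaping: `/` becomes `-` and characters outside the allowed set (including literal `-`) become `\xNN`. A simplified sketch of that rule (the real `systemd-escape --path` also handles empty paths, a leading dot, and `/` itself):

```python
# Simplified sketch of systemd's path escaping, as seen in unit names
# like "dev-disk-by\x2dlabel-OEM.device" in the log above. Not the full
# algorithm -- edge cases (empty path, root "/", leading ".") are omitted.
def systemd_escape_path(path: str) -> str:
    trimmed = path.strip("/")
    out = []
    for i, ch in enumerate(trimmed):
        if ch == "/":
            out.append("-")  # path separators become dashes
        elif ch.isalnum() or (ch in ":_." and not (ch == "." and i == 0)):
            out.append(ch)   # allowed characters pass through unchanged
        else:
            out.append(f"\\x{ord(ch):02x}")  # everything else is \xNN-escaped
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/OEM"))
# dev-disk-by\x2dlabel-OEM
```

Applied to `/var/lib/flatcar-install/user_data` this yields `var-lib-flatcar\x2dinstall-user_data`, matching the `.path` unit instance in the entry above.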
May 15 09:18:08.224851 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 09:18:08.225703 systemd[1]: Starting containerd.service - containerd container runtime... May 15 09:18:08.227556 lvm[1444]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 09:18:08.227629 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 09:18:08.231293 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 09:18:08.236992 jq[1447]: false May 15 09:18:08.236283 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 09:18:08.237408 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 09:18:08.238406 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 09:18:08.241236 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 15 09:18:08.244401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 09:18:08.247319 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 15 09:18:08.250005 extend-filesystems[1448]: Found loop3 May 15 09:18:08.251457 extend-filesystems[1448]: Found loop4 May 15 09:18:08.251457 extend-filesystems[1448]: Found loop5 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda May 15 09:18:08.251457 extend-filesystems[1448]: Found vda1 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda2 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda3 May 15 09:18:08.251457 extend-filesystems[1448]: Found usr May 15 09:18:08.251457 extend-filesystems[1448]: Found vda4 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda6 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda7 May 15 09:18:08.251457 extend-filesystems[1448]: Found vda9 May 15 09:18:08.251457 extend-filesystems[1448]: Checking size of /dev/vda9 May 15 09:18:08.285371 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1370) May 15 09:18:08.259107 dbus-daemon[1446]: [system] SELinux support is enabled May 15 09:18:08.254290 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 09:18:08.257867 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 09:18:08.258361 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 09:18:08.262318 systemd[1]: Starting update-engine.service - Update Engine... May 15 09:18:08.285978 jq[1464]: true May 15 09:18:08.264864 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 09:18:08.267347 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 09:18:08.272526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 09:18:08.280373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 15 09:18:08.280522 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 09:18:08.280759 systemd[1]: motdgen.service: Deactivated successfully. May 15 09:18:08.280887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 09:18:08.288602 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 09:18:08.289309 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 09:18:08.294384 extend-filesystems[1448]: Resized partition /dev/vda9 May 15 09:18:08.306162 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 09:18:08.306215 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) May 15 09:18:08.306991 systemd-logind[1458]: Watching system buttons on /dev/input/event0 (Power Button) May 15 09:18:08.309698 systemd-logind[1458]: New seat seat0. May 15 09:18:08.317060 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 09:18:08.317115 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 09:18:08.317167 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 09:18:08.319513 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 09:18:08.319531 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 09:18:08.324398 jq[1472]: true May 15 09:18:08.323374 systemd[1]: Started systemd-logind.service - User Login Management. 
May 15 09:18:08.327531 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 09:18:08.328034 update_engine[1462]: I20250515 09:18:08.326943 1462 main.cc:92] Flatcar Update Engine starting May 15 09:18:08.333609 update_engine[1462]: I20250515 09:18:08.331205 1462 update_check_scheduler.cc:74] Next update check in 5m15s May 15 09:18:08.343448 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 09:18:08.343448 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 09:18:08.343448 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 09:18:08.338251 systemd[1]: Started update-engine.service - Update Engine. May 15 09:18:08.348742 extend-filesystems[1448]: Resized filesystem in /dev/vda9 May 15 09:18:08.344337 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 09:18:08.348180 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 09:18:08.356328 tar[1471]: linux-arm64/helm May 15 09:18:08.348337 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 09:18:08.395162 bash[1502]: Updated "/home/core/.ssh/authorized_keys" May 15 09:18:08.399190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 09:18:08.401044 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 15 09:18:08.407444 locksmithd[1486]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 09:18:08.511798 containerd[1474]: time="2025-05-15T09:18:08.511699360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 09:18:08.537585 containerd[1474]: time="2025-05-15T09:18:08.537545920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 May 15 09:18:08.539048 containerd[1474]: time="2025-05-15T09:18:08.539014040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 09:18:08.539048 containerd[1474]: time="2025-05-15T09:18:08.539047000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 09:18:08.539134 containerd[1474]: time="2025-05-15T09:18:08.539064880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 09:18:08.539259 containerd[1474]: time="2025-05-15T09:18:08.539236960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 09:18:08.539282 containerd[1474]: time="2025-05-15T09:18:08.539262880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539329 containerd[1474]: time="2025-05-15T09:18:08.539314120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:18:08.539351 containerd[1474]: time="2025-05-15T09:18:08.539329960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539498 containerd[1474]: time="2025-05-15T09:18:08.539480000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:18:08.539517 containerd[1474]: time="2025-05-15T09:18:08.539498840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539533 containerd[1474]: time="2025-05-15T09:18:08.539512840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:18:08.539533 containerd[1474]: time="2025-05-15T09:18:08.539521960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539604 containerd[1474]: time="2025-05-15T09:18:08.539590440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539801 containerd[1474]: time="2025-05-15T09:18:08.539783360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 09:18:08.539907 containerd[1474]: time="2025-05-15T09:18:08.539892520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 09:18:08.539932 containerd[1474]: time="2025-05-15T09:18:08.539908920Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 15 09:18:08.540004 containerd[1474]: time="2025-05-15T09:18:08.539989600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 15 09:18:08.540048 containerd[1474]: time="2025-05-15T09:18:08.540036920Z" level=info msg="metadata content store policy set" policy=shared May 15 09:18:08.548397 containerd[1474]: time="2025-05-15T09:18:08.548368280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 09:18:08.548458 containerd[1474]: time="2025-05-15T09:18:08.548418120Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 09:18:08.548458 containerd[1474]: time="2025-05-15T09:18:08.548434640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 09:18:08.548458 containerd[1474]: time="2025-05-15T09:18:08.548450440Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 09:18:08.548526 containerd[1474]: time="2025-05-15T09:18:08.548463400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 09:18:08.548611 containerd[1474]: time="2025-05-15T09:18:08.548588360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 09:18:08.548822 containerd[1474]: time="2025-05-15T09:18:08.548805200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 09:18:08.548920 containerd[1474]: time="2025-05-15T09:18:08.548904160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 09:18:08.548942 containerd[1474]: time="2025-05-15T09:18:08.548925880Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 15 09:18:08.548966 containerd[1474]: time="2025-05-15T09:18:08.548940120Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 May 15 09:18:08.548966 containerd[1474]: time="2025-05-15T09:18:08.548953400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 09:18:08.548999 containerd[1474]: time="2025-05-15T09:18:08.548965320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 09:18:08.548999 containerd[1474]: time="2025-05-15T09:18:08.548978320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 09:18:08.548999 containerd[1474]: time="2025-05-15T09:18:08.548991520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 09:18:08.549045 containerd[1474]: time="2025-05-15T09:18:08.549005080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 09:18:08.549045 containerd[1474]: time="2025-05-15T09:18:08.549018360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 09:18:08.549045 containerd[1474]: time="2025-05-15T09:18:08.549030080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 09:18:08.549045 containerd[1474]: time="2025-05-15T09:18:08.549041440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 09:18:08.549112 containerd[1474]: time="2025-05-15T09:18:08.549060960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549112 containerd[1474]: time="2025-05-15T09:18:08.549075200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 May 15 09:18:08.549112 containerd[1474]: time="2025-05-15T09:18:08.549087480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549112 containerd[1474]: time="2025-05-15T09:18:08.549099280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549112 containerd[1474]: time="2025-05-15T09:18:08.549110320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549132640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549162960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549175760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549188840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549203160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549214560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549228 containerd[1474]: time="2025-05-15T09:18:08.549226960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549340 containerd[1474]: time="2025-05-15T09:18:08.549238880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 May 15 09:18:08.549340 containerd[1474]: time="2025-05-15T09:18:08.549258480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 09:18:08.549340 containerd[1474]: time="2025-05-15T09:18:08.549277680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549340 containerd[1474]: time="2025-05-15T09:18:08.549290480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549340 containerd[1474]: time="2025-05-15T09:18:08.549301840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 09:18:08.549485 containerd[1474]: time="2025-05-15T09:18:08.549472080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 09:18:08.549509 containerd[1474]: time="2025-05-15T09:18:08.549490400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 09:18:08.549509 containerd[1474]: time="2025-05-15T09:18:08.549501240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 09:18:08.549551 containerd[1474]: time="2025-05-15T09:18:08.549515800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 09:18:08.549551 containerd[1474]: time="2025-05-15T09:18:08.549525520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549551 containerd[1474]: time="2025-05-15T09:18:08.549537600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 May 15 09:18:08.549551 containerd[1474]: time="2025-05-15T09:18:08.549547360Z" level=info msg="NRI interface is disabled by configuration." May 15 09:18:08.549615 containerd[1474]: time="2025-05-15T09:18:08.549558560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 09:18:08.549841 containerd[1474]: time="2025-05-15T09:18:08.549797960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 09:18:08.549954 containerd[1474]: time="2025-05-15T09:18:08.549847280Z" level=info msg="Connect containerd service" May 15 09:18:08.549954 containerd[1474]: time="2025-05-15T09:18:08.549873320Z" level=info msg="using legacy CRI server" May 15 09:18:08.549954 containerd[1474]: time="2025-05-15T09:18:08.549880040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 09:18:08.550153 containerd[1474]: time="2025-05-15T09:18:08.550130840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 09:18:08.550733 containerd[1474]: time="2025-05-15T09:18:08.550707600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 09:18:08.555502 containerd[1474]: time="2025-05-15T09:18:08.555450200Z" level=info msg="Start subscribing containerd event" May 15 
09:18:08.555570 containerd[1474]: time="2025-05-15T09:18:08.555509960Z" level=info msg="Start recovering state" May 15 09:18:08.555591 containerd[1474]: time="2025-05-15T09:18:08.555584600Z" level=info msg="Start event monitor" May 15 09:18:08.555609 containerd[1474]: time="2025-05-15T09:18:08.555598000Z" level=info msg="Start snapshots syncer" May 15 09:18:08.555626 containerd[1474]: time="2025-05-15T09:18:08.555609000Z" level=info msg="Start cni network conf syncer for default" May 15 09:18:08.555626 containerd[1474]: time="2025-05-15T09:18:08.555622480Z" level=info msg="Start streaming server" May 15 09:18:08.556986 containerd[1474]: time="2025-05-15T09:18:08.556249320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 09:18:08.556986 containerd[1474]: time="2025-05-15T09:18:08.556302080Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 09:18:08.556986 containerd[1474]: time="2025-05-15T09:18:08.556359160Z" level=info msg="containerd successfully booted in 0.049324s" May 15 09:18:08.557060 systemd[1]: Started containerd.service - containerd container runtime. May 15 09:18:08.690403 tar[1471]: linux-arm64/LICENSE May 15 09:18:08.690601 tar[1471]: linux-arm64/README.md May 15 09:18:08.706901 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 09:18:09.945294 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 09:18:09.972213 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 09:18:09.981831 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 09:18:09.991373 systemd[1]: issuegen.service: Deactivated successfully. May 15 09:18:09.992231 systemd[1]: Finished issuegen.service - Generate /run/issue. May 15 09:18:10.003359 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 09:18:10.012535 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
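The online resize recorded earlier in the boot (resize2fs growing /dev/vda9 from 553472 to 1864699 blocks, with the "(4k) blocks" message implying a 4096-byte ext4 block size) corresponds to roughly a 2 GiB → 7 GiB filesystem. The arithmetic, using the block counts from the log:

```python
BLOCK = 4096  # ext4 block size implied by the "(4k) blocks" message in the log

# Block counts reported by resize2fs for /dev/vda9 above.
old_blocks, new_blocks = 553_472, 1_864_699
old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK

print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
# 2.11 GiB -> 7.11 GiB
```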
May 15 09:18:10.017214 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 09:18:10.019488 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 09:18:10.020755 systemd[1]: Reached target getty.target - Login Prompts. May 15 09:18:10.037333 systemd-networkd[1404]: eth0: Gained IPv6LL May 15 09:18:10.039450 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 09:18:10.041581 systemd[1]: Reached target network-online.target - Network is Online. May 15 09:18:10.043952 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 09:18:10.047484 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:10.049615 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 09:18:10.069543 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 09:18:10.076882 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 09:18:10.077063 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 09:18:10.079577 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 09:18:10.533469 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:18:10.533720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:10.535344 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 09:18:10.539695 systemd[1]: Startup finished in 595ms (kernel) + 5.909s (initrd) + 3.949s (userspace) = 10.454s. 
May 15 09:18:10.992965 kubelet[1559]: E0515 09:18:10.992851 1559 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:18:10.995558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:18:10.995712 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:18:13.590872 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 15 09:18:13.592001 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:34818.service - OpenSSH per-connection server daemon (10.0.0.1:34818). May 15 09:18:13.729976 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 34818 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:13.732124 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:13.739554 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 15 09:18:13.749401 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 15 09:18:13.751280 systemd-logind[1458]: New session 1 of user core. May 15 09:18:13.760178 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 15 09:18:13.762459 systemd[1]: Starting user@500.service - User Manager for UID 500... May 15 09:18:13.770179 (systemd)[1578]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 09:18:13.864098 systemd[1578]: Queued start job for default target default.target. May 15 09:18:13.875267 systemd[1578]: Created slice app.slice - User Application Slice. May 15 09:18:13.875298 systemd[1578]: Reached target paths.target - Paths. 
May 15 09:18:13.875311 systemd[1578]: Reached target timers.target - Timers. May 15 09:18:13.876649 systemd[1578]: Starting dbus.socket - D-Bus User Message Bus Socket... May 15 09:18:13.886961 systemd[1578]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 15 09:18:13.887029 systemd[1578]: Reached target sockets.target - Sockets. May 15 09:18:13.887041 systemd[1578]: Reached target basic.target - Basic System. May 15 09:18:13.887075 systemd[1578]: Reached target default.target - Main User Target. May 15 09:18:13.887101 systemd[1578]: Startup finished in 109ms. May 15 09:18:13.887473 systemd[1]: Started user@500.service - User Manager for UID 500. May 15 09:18:13.889049 systemd[1]: Started session-1.scope - Session 1 of User core. May 15 09:18:13.950089 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:34824.service - OpenSSH per-connection server daemon (10.0.0.1:34824). May 15 09:18:13.989325 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 34824 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:13.990507 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:13.994695 systemd-logind[1458]: New session 2 of user core. May 15 09:18:14.003299 systemd[1]: Started session-2.scope - Session 2 of User core. May 15 09:18:14.054530 sshd[1591]: Connection closed by 10.0.0.1 port 34824 May 15 09:18:14.054894 sshd-session[1589]: pam_unix(sshd:session): session closed for user core May 15 09:18:14.074267 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:34824.service: Deactivated successfully. May 15 09:18:14.075646 systemd[1]: session-2.scope: Deactivated successfully. May 15 09:18:14.078262 systemd-logind[1458]: Session 2 logged out. Waiting for processes to exit. May 15 09:18:14.086533 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:34830.service - OpenSSH per-connection server daemon (10.0.0.1:34830). May 15 09:18:14.087338 systemd-logind[1458]: Removed session 2. 
May 15 09:18:14.121675 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 34830 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:14.122815 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:14.126325 systemd-logind[1458]: New session 3 of user core. May 15 09:18:14.134291 systemd[1]: Started session-3.scope - Session 3 of User core. May 15 09:18:14.181271 sshd[1598]: Connection closed by 10.0.0.1 port 34830 May 15 09:18:14.182079 sshd-session[1596]: pam_unix(sshd:session): session closed for user core May 15 09:18:14.196740 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:34830.service: Deactivated successfully. May 15 09:18:14.197982 systemd[1]: session-3.scope: Deactivated successfully. May 15 09:18:14.199325 systemd-logind[1458]: Session 3 logged out. Waiting for processes to exit. May 15 09:18:14.200425 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834). May 15 09:18:14.201270 systemd-logind[1458]: Removed session 3. May 15 09:18:14.239203 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:14.240426 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:14.244509 systemd-logind[1458]: New session 4 of user core. May 15 09:18:14.260311 systemd[1]: Started session-4.scope - Session 4 of User core. May 15 09:18:14.311665 sshd[1605]: Connection closed by 10.0.0.1 port 34834 May 15 09:18:14.312161 sshd-session[1603]: pam_unix(sshd:session): session closed for user core May 15 09:18:14.320400 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:34834.service: Deactivated successfully. May 15 09:18:14.322109 systemd[1]: session-4.scope: Deactivated successfully. May 15 09:18:14.325301 systemd-logind[1458]: Session 4 logged out. Waiting for processes to exit. 
May 15 09:18:14.326401 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:34836.service - OpenSSH per-connection server daemon (10.0.0.1:34836). May 15 09:18:14.327067 systemd-logind[1458]: Removed session 4. May 15 09:18:14.372662 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 34836 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:14.374897 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:14.378821 systemd-logind[1458]: New session 5 of user core. May 15 09:18:14.393300 systemd[1]: Started session-5.scope - Session 5 of User core. May 15 09:18:14.454685 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 15 09:18:14.455036 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:18:14.467991 sudo[1613]: pam_unix(sudo:session): session closed for user root May 15 09:18:14.469349 sshd[1612]: Connection closed by 10.0.0.1 port 34836 May 15 09:18:14.469755 sshd-session[1610]: pam_unix(sshd:session): session closed for user core May 15 09:18:14.478446 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:34836.service: Deactivated successfully. May 15 09:18:14.480595 systemd[1]: session-5.scope: Deactivated successfully. May 15 09:18:14.482018 systemd-logind[1458]: Session 5 logged out. Waiting for processes to exit. May 15 09:18:14.503761 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848). May 15 09:18:14.504880 systemd-logind[1458]: Removed session 5. May 15 09:18:14.538663 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:14.539918 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:14.543785 systemd-logind[1458]: New session 6 of user core. 
May 15 09:18:14.555339 systemd[1]: Started session-6.scope - Session 6 of User core. May 15 09:18:14.606502 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 15 09:18:14.606768 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:18:14.610300 sudo[1622]: pam_unix(sudo:session): session closed for user root May 15 09:18:14.615338 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 15 09:18:14.615628 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:18:14.634775 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 09:18:14.669222 augenrules[1644]: No rules May 15 09:18:14.670142 systemd[1]: audit-rules.service: Deactivated successfully. May 15 09:18:14.670454 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 09:18:14.672352 sudo[1621]: pam_unix(sudo:session): session closed for user root May 15 09:18:14.673636 sshd[1620]: Connection closed by 10.0.0.1 port 34848 May 15 09:18:14.673953 sshd-session[1618]: pam_unix(sshd:session): session closed for user core May 15 09:18:14.684470 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:34848.service: Deactivated successfully. May 15 09:18:14.686108 systemd[1]: session-6.scope: Deactivated successfully. May 15 09:18:14.689076 systemd-logind[1458]: Session 6 logged out. Waiting for processes to exit. May 15 09:18:14.702534 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862). May 15 09:18:14.704088 systemd-logind[1458]: Removed session 6. 
May 15 09:18:14.738212 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:18:14.739470 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:18:14.743352 systemd-logind[1458]: New session 7 of user core. May 15 09:18:14.757307 systemd[1]: Started session-7.scope - Session 7 of User core. May 15 09:18:14.810171 sudo[1655]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 09:18:14.810470 sudo[1655]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 15 09:18:15.135440 systemd[1]: Starting docker.service - Docker Application Container Engine... May 15 09:18:15.135517 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 15 09:18:15.390240 dockerd[1676]: time="2025-05-15T09:18:15.390106303Z" level=info msg="Starting up" May 15 09:18:15.565394 dockerd[1676]: time="2025-05-15T09:18:15.565354613Z" level=info msg="Loading containers: start." May 15 09:18:15.711253 kernel: Initializing XFRM netlink socket May 15 09:18:15.780700 systemd-networkd[1404]: docker0: Link UP May 15 09:18:15.817211 dockerd[1676]: time="2025-05-15T09:18:15.817172549Z" level=info msg="Loading containers: done." May 15 09:18:15.829262 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck726934662-merged.mount: Deactivated successfully. 
May 15 09:18:15.831233 dockerd[1676]: time="2025-05-15T09:18:15.830631645Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 15 09:18:15.831233 dockerd[1676]: time="2025-05-15T09:18:15.830707334Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 May 15 09:18:15.831233 dockerd[1676]: time="2025-05-15T09:18:15.830795631Z" level=info msg="Daemon has completed initialization" May 15 09:18:15.857339 dockerd[1676]: time="2025-05-15T09:18:15.857269725Z" level=info msg="API listen on /run/docker.sock" May 15 09:18:15.859315 systemd[1]: Started docker.service - Docker Application Container Engine. May 15 09:18:16.717311 containerd[1474]: time="2025-05-15T09:18:16.717271999Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 15 09:18:17.404434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount464710269.mount: Deactivated successfully. 
May 15 09:18:18.889633 containerd[1474]: time="2025-05-15T09:18:18.889582480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:18.890198 containerd[1474]: time="2025-05-15T09:18:18.890136344Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152" May 15 09:18:18.890885 containerd[1474]: time="2025-05-15T09:18:18.890861228Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:18.895169 containerd[1474]: time="2025-05-15T09:18:18.893764332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:18.896061 containerd[1474]: time="2025-05-15T09:18:18.896033279Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 2.178688114s" May 15 09:18:18.896166 containerd[1474]: time="2025-05-15T09:18:18.896139694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\"" May 15 09:18:18.914582 containerd[1474]: time="2025-05-15T09:18:18.914550564Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 15 09:18:20.682067 containerd[1474]: time="2025-05-15T09:18:20.681814157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:20.682917 containerd[1474]: time="2025-05-15T09:18:20.682683805Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552" May 15 09:18:20.684889 containerd[1474]: time="2025-05-15T09:18:20.683564163Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:20.686708 containerd[1474]: time="2025-05-15T09:18:20.686674738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:20.687947 containerd[1474]: time="2025-05-15T09:18:20.687823880Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.773064502s" May 15 09:18:20.687947 containerd[1474]: time="2025-05-15T09:18:20.687854004Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\"" May 15 09:18:20.705465 containerd[1474]: time="2025-05-15T09:18:20.705437941Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 15 09:18:21.246113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 15 09:18:21.261351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:21.352395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 09:18:21.355810 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:18:21.403401 kubelet[1959]: E0515 09:18:21.403344 1959 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:18:21.406724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:18:21.406911 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:18:21.766805 containerd[1474]: time="2025-05-15T09:18:21.766733811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:21.767308 containerd[1474]: time="2025-05-15T09:18:21.767269442Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947" May 15 09:18:21.768494 containerd[1474]: time="2025-05-15T09:18:21.768458449Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:21.771877 containerd[1474]: time="2025-05-15T09:18:21.771837304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:21.773241 containerd[1474]: time="2025-05-15T09:18:21.773201687Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.067730976s" May 15 09:18:21.773274 containerd[1474]: time="2025-05-15T09:18:21.773239264Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\"" May 15 09:18:21.791448 containerd[1474]: time="2025-05-15T09:18:21.791411213Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 15 09:18:23.014671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528268701.mount: Deactivated successfully. May 15 09:18:23.355046 containerd[1474]: time="2025-05-15T09:18:23.354929324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:23.356032 containerd[1474]: time="2025-05-15T09:18:23.355983331Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 15 09:18:23.356878 containerd[1474]: time="2025-05-15T09:18:23.356849388Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:23.359298 containerd[1474]: time="2025-05-15T09:18:23.359268671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:23.360619 containerd[1474]: time="2025-05-15T09:18:23.360587322Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.569140739s" May 15 09:18:23.360661 containerd[1474]: time="2025-05-15T09:18:23.360619956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 15 09:18:23.378497 containerd[1474]: time="2025-05-15T09:18:23.378469749Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 15 09:18:23.909568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613997858.mount: Deactivated successfully. May 15 09:18:24.710061 containerd[1474]: time="2025-05-15T09:18:24.709995090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:24.711082 containerd[1474]: time="2025-05-15T09:18:24.711033673Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 15 09:18:24.712109 containerd[1474]: time="2025-05-15T09:18:24.712069851Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:24.715086 containerd[1474]: time="2025-05-15T09:18:24.715033115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:24.716327 containerd[1474]: time="2025-05-15T09:18:24.716287961Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.337784056s" May 15 09:18:24.716378 containerd[1474]: time="2025-05-15T09:18:24.716328688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 15 09:18:24.734424 containerd[1474]: time="2025-05-15T09:18:24.734378968Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 15 09:18:25.241804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702205897.mount: Deactivated successfully. May 15 09:18:25.246487 containerd[1474]: time="2025-05-15T09:18:25.245814357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:25.246487 containerd[1474]: time="2025-05-15T09:18:25.246459332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" May 15 09:18:25.247191 containerd[1474]: time="2025-05-15T09:18:25.247159657Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:25.249990 containerd[1474]: time="2025-05-15T09:18:25.249956351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:25.251555 containerd[1474]: time="2025-05-15T09:18:25.251523096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 517.104444ms" May 15 
09:18:25.251596 containerd[1474]: time="2025-05-15T09:18:25.251554639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" May 15 09:18:25.269361 containerd[1474]: time="2025-05-15T09:18:25.269308995Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 15 09:18:25.825075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3858445229.mount: Deactivated successfully. May 15 09:18:28.312361 containerd[1474]: time="2025-05-15T09:18:28.312305669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:28.312908 containerd[1474]: time="2025-05-15T09:18:28.312853615Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" May 15 09:18:28.313641 containerd[1474]: time="2025-05-15T09:18:28.313616357Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:28.316546 containerd[1474]: time="2025-05-15T09:18:28.316521964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:18:28.318857 containerd[1474]: time="2025-05-15T09:18:28.318818844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.049461155s" May 15 09:18:28.318940 containerd[1474]: time="2025-05-15T09:18:28.318858790Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" May 15 09:18:31.657095 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 15 09:18:31.666473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:31.799564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:31.803866 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 15 09:18:31.845651 kubelet[2182]: E0515 09:18:31.845590 2182 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 09:18:31.848390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 09:18:31.848684 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 09:18:34.149715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:34.162450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:34.179067 systemd[1]: Reloading requested from client PID 2197 ('systemctl') (unit session-7.scope)... May 15 09:18:34.179252 systemd[1]: Reloading... May 15 09:18:34.251186 zram_generator::config[2239]: No configuration found. May 15 09:18:34.466622 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:18:34.520178 systemd[1]: Reloading finished in 340 ms. 
May 15 09:18:34.566696 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 15 09:18:34.566796 systemd[1]: kubelet.service: Failed with result 'signal'. May 15 09:18:34.567065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:34.568891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:34.666056 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:34.670669 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 09:18:34.711826 kubelet[2282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:18:34.711826 kubelet[2282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 09:18:34.711826 kubelet[2282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 09:18:34.712568 kubelet[2282]: I0515 09:18:34.712530 2282 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 09:18:35.660274 kubelet[2282]: I0515 09:18:35.660230 2282 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 09:18:35.660274 kubelet[2282]: I0515 09:18:35.660263 2282 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 09:18:35.660483 kubelet[2282]: I0515 09:18:35.660469 2282 server.go:927] "Client rotation is on, will bootstrap in background" May 15 09:18:35.702724 kubelet[2282]: E0515 09:18:35.702690 2282 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.703204 kubelet[2282]: I0515 09:18:35.703190 2282 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:18:35.712712 kubelet[2282]: I0515 09:18:35.712680 2282 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 09:18:35.713877 kubelet[2282]: I0515 09:18:35.713826 2282 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 09:18:35.714064 kubelet[2282]: I0515 09:18:35.713875 2282 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 09:18:35.714220 kubelet[2282]: I0515 09:18:35.714195 2282 topology_manager.go:138] "Creating topology manager with none policy" May 15 
09:18:35.714220 kubelet[2282]: I0515 09:18:35.714209 2282 container_manager_linux.go:301] "Creating device plugin manager" May 15 09:18:35.714603 kubelet[2282]: I0515 09:18:35.714582 2282 state_mem.go:36] "Initialized new in-memory state store" May 15 09:18:35.715659 kubelet[2282]: I0515 09:18:35.715637 2282 kubelet.go:400] "Attempting to sync node with API server" May 15 09:18:35.715707 kubelet[2282]: I0515 09:18:35.715663 2282 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 09:18:35.717185 kubelet[2282]: I0515 09:18:35.715871 2282 kubelet.go:312] "Adding apiserver pod source" May 15 09:18:35.717185 kubelet[2282]: I0515 09:18:35.715992 2282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 09:18:35.718489 kubelet[2282]: W0515 09:18:35.718426 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.718530 kubelet[2282]: E0515 09:18:35.718503 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.720369 kubelet[2282]: W0515 09:18:35.720318 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.720417 kubelet[2282]: E0515 09:18:35.720372 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 
15 09:18:35.720589 kubelet[2282]: I0515 09:18:35.720559 2282 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 09:18:35.720970 kubelet[2282]: I0515 09:18:35.720950 2282 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 09:18:35.721091 kubelet[2282]: W0515 09:18:35.721080 2282 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 09:18:35.722150 kubelet[2282]: I0515 09:18:35.722109 2282 server.go:1264] "Started kubelet" May 15 09:18:35.723240 kubelet[2282]: I0515 09:18:35.723172 2282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 09:18:35.723280 kubelet[2282]: I0515 09:18:35.723242 2282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 09:18:35.723410 kubelet[2282]: I0515 09:18:35.723382 2282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 09:18:35.723410 kubelet[2282]: I0515 09:18:35.723403 2282 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 09:18:35.724416 kubelet[2282]: I0515 09:18:35.724374 2282 server.go:455] "Adding debug handlers to kubelet server" May 15 09:18:35.724746 kubelet[2282]: I0515 09:18:35.724714 2282 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 09:18:35.724854 kubelet[2282]: I0515 09:18:35.724832 2282 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 09:18:35.727051 kubelet[2282]: E0515 09:18:35.726413 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" May 15 09:18:35.727051 kubelet[2282]: I0515 
09:18:35.726824 2282 reconciler.go:26] "Reconciler: start to sync state" May 15 09:18:35.727579 kubelet[2282]: W0515 09:18:35.727536 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.727687 kubelet[2282]: E0515 09:18:35.727675 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.728698 kubelet[2282]: I0515 09:18:35.728677 2282 factory.go:221] Registration of the systemd container factory successfully May 15 09:18:35.729419 kubelet[2282]: I0515 09:18:35.729197 2282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 09:18:35.729419 kubelet[2282]: E0515 09:18:35.729314 2282 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 09:18:35.729419 kubelet[2282]: E0515 09:18:35.725331 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fa8be4c669ac3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 09:18:35.722087107 +0000 UTC m=+1.048234426,LastTimestamp:2025-05-15 09:18:35.722087107 +0000 UTC m=+1.048234426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 09:18:35.731432 kubelet[2282]: I0515 09:18:35.730965 2282 factory.go:221] Registration of the containerd container factory successfully May 15 09:18:35.742663 kubelet[2282]: I0515 09:18:35.742605 2282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 09:18:35.743627 kubelet[2282]: I0515 09:18:35.743602 2282 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 09:18:35.744025 kubelet[2282]: I0515 09:18:35.743753 2282 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 09:18:35.744025 kubelet[2282]: I0515 09:18:35.743774 2282 kubelet.go:2337] "Starting kubelet main sync loop" May 15 09:18:35.744025 kubelet[2282]: E0515 09:18:35.743815 2282 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 09:18:35.744507 kubelet[2282]: W0515 09:18:35.744476 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.744569 kubelet[2282]: E0515 09:18:35.744516 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:35.746671 kubelet[2282]: I0515 09:18:35.746652 2282 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 09:18:35.746794 kubelet[2282]: I0515 09:18:35.746780 2282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 09:18:35.746908 kubelet[2282]: I0515 09:18:35.746897 2282 state_mem.go:36] "Initialized new in-memory state store" May 15 09:18:35.816115 kubelet[2282]: I0515 09:18:35.816029 2282 policy_none.go:49] "None policy: Start" May 15 09:18:35.816945 kubelet[2282]: I0515 09:18:35.816924 2282 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 09:18:35.817002 kubelet[2282]: I0515 09:18:35.816956 2282 state_mem.go:35] "Initializing new in-memory state store" May 15 09:18:35.823399 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 15 09:18:35.826202 kubelet[2282]: I0515 09:18:35.826177 2282 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:35.826515 kubelet[2282]: E0515 09:18:35.826494 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" May 15 09:18:35.841289 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 15 09:18:35.843886 kubelet[2282]: E0515 09:18:35.843859 2282 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 09:18:35.844137 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 15 09:18:35.855032 kubelet[2282]: I0515 09:18:35.854999 2282 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:18:35.855427 kubelet[2282]: I0515 09:18:35.855232 2282 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:18:35.855427 kubelet[2282]: I0515 09:18:35.855360 2282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:18:35.856805 kubelet[2282]: E0515 09:18:35.856777 2282 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 09:18:35.927952 kubelet[2282]: E0515 09:18:35.927817 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" May 15 09:18:36.028263 kubelet[2282]: I0515 09:18:36.028202 2282 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:36.028605 kubelet[2282]: 
E0515 09:18:36.028555 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" May 15 09:18:36.044924 kubelet[2282]: I0515 09:18:36.044813 2282 topology_manager.go:215] "Topology Admit Handler" podUID="f11c1a596cc34caf0a7a3c821540240a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 09:18:36.045967 kubelet[2282]: I0515 09:18:36.045941 2282 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 09:18:36.047252 kubelet[2282]: I0515 09:18:36.047226 2282 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 09:18:36.052527 systemd[1]: Created slice kubepods-burstable-podf11c1a596cc34caf0a7a3c821540240a.slice - libcontainer container kubepods-burstable-podf11c1a596cc34caf0a7a3c821540240a.slice. May 15 09:18:36.074756 systemd[1]: Created slice kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice - libcontainer container kubepods-burstable-podb20b39a8540dba87b5883a6f0f602dba.slice. May 15 09:18:36.078470 systemd[1]: Created slice kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice - libcontainer container kubepods-burstable-pod6ece95f10dbffa04b25ec3439a115512.slice. 
May 15 09:18:36.128216 kubelet[2282]: I0515 09:18:36.128169 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:36.128216 kubelet[2282]: I0515 09:18:36.128213 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:36.128364 kubelet[2282]: I0515 09:18:36.128236 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:36.128364 kubelet[2282]: I0515 09:18:36.128255 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:36.128364 kubelet[2282]: I0515 09:18:36.128279 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 
09:18:36.128364 kubelet[2282]: I0515 09:18:36.128295 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:36.128364 kubelet[2282]: I0515 09:18:36.128311 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:36.128466 kubelet[2282]: I0515 09:18:36.128331 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:36.128466 kubelet[2282]: I0515 09:18:36.128354 2282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:36.216162 kubelet[2282]: E0515 09:18:36.215955 2282 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fa8be4c669ac3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 09:18:35.722087107 +0000 UTC m=+1.048234426,LastTimestamp:2025-05-15 09:18:35.722087107 +0000 UTC m=+1.048234426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 15 09:18:36.329269 kubelet[2282]: E0515 09:18:36.329214 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" May 15 09:18:36.372859 kubelet[2282]: E0515 09:18:36.372788 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:36.373636 containerd[1474]: time="2025-05-15T09:18:36.373554329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f11c1a596cc34caf0a7a3c821540240a,Namespace:kube-system,Attempt:0,}" May 15 09:18:36.377225 kubelet[2282]: E0515 09:18:36.377186 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:36.377805 containerd[1474]: time="2025-05-15T09:18:36.377765009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 09:18:36.381254 kubelet[2282]: E0515 09:18:36.381203 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:36.381892 containerd[1474]: time="2025-05-15T09:18:36.381645922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 09:18:36.430128 kubelet[2282]: I0515 09:18:36.430098 2282 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:36.430477 kubelet[2282]: E0515 09:18:36.430439 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" May 15 09:18:36.635258 kubelet[2282]: W0515 09:18:36.635085 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:36.635258 kubelet[2282]: E0515 09:18:36.635187 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:36.673597 kubelet[2282]: W0515 09:18:36.673550 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:36.673597 kubelet[2282]: E0515 09:18:36.673596 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:36.905379 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4162691332.mount: Deactivated successfully. May 15 09:18:36.910712 containerd[1474]: time="2025-05-15T09:18:36.910662331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:18:36.912631 containerd[1474]: time="2025-05-15T09:18:36.912580466Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 09:18:36.913307 containerd[1474]: time="2025-05-15T09:18:36.913266063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:18:36.914345 containerd[1474]: time="2025-05-15T09:18:36.914253278Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:18:36.915116 containerd[1474]: time="2025-05-15T09:18:36.915076972Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:18:36.916023 containerd[1474]: time="2025-05-15T09:18:36.915965450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:18:36.922770 containerd[1474]: time="2025-05-15T09:18:36.922710913Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 09:18:36.924200 containerd[1474]: time="2025-05-15T09:18:36.924132278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 09:18:36.925186 containerd[1474]: time="2025-05-15T09:18:36.924958614Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.074968ms" May 15 09:18:36.928464 containerd[1474]: time="2025-05-15T09:18:36.928411585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.775895ms" May 15 09:18:36.930068 containerd[1474]: time="2025-05-15T09:18:36.929837474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.120041ms" May 15 09:18:36.967804 kubelet[2282]: W0515 09:18:36.967648 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:36.967804 kubelet[2282]: E0515 09:18:36.967721 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:37.065595 
containerd[1474]: time="2025-05-15T09:18:37.065476075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:37.065595 containerd[1474]: time="2025-05-15T09:18:37.065547421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:37.065862 containerd[1474]: time="2025-05-15T09:18:37.065564717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.065862 containerd[1474]: time="2025-05-15T09:18:37.065707049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.066130 containerd[1474]: time="2025-05-15T09:18:37.066055091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:37.066267 containerd[1474]: time="2025-05-15T09:18:37.066122994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:37.066267 containerd[1474]: time="2025-05-15T09:18:37.066136366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.066267 containerd[1474]: time="2025-05-15T09:18:37.066228131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.075093 containerd[1474]: time="2025-05-15T09:18:37.073515561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:37.075093 containerd[1474]: time="2025-05-15T09:18:37.073573695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:37.075093 containerd[1474]: time="2025-05-15T09:18:37.073586026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.075093 containerd[1474]: time="2025-05-15T09:18:37.073657412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:37.092353 systemd[1]: Started cri-containerd-334a44b77e07c2eff5cc92d2b546064290d416c56d779aea7dc33a2a0f4599e9.scope - libcontainer container 334a44b77e07c2eff5cc92d2b546064290d416c56d779aea7dc33a2a0f4599e9. May 15 09:18:37.097055 systemd[1]: Started cri-containerd-36447aa8a70e79abd383701ecf18aa50fd36a3003e9939c01ee0d3d16cf853f2.scope - libcontainer container 36447aa8a70e79abd383701ecf18aa50fd36a3003e9939c01ee0d3d16cf853f2. May 15 09:18:37.098965 systemd[1]: Started cri-containerd-cfc790d1205e138d75060dd8a7865cbf3f9d29df68c09fb39a6d195925ad9eb2.scope - libcontainer container cfc790d1205e138d75060dd8a7865cbf3f9d29df68c09fb39a6d195925ad9eb2. 
May 15 09:18:37.127900 containerd[1474]: time="2025-05-15T09:18:37.127763205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f11c1a596cc34caf0a7a3c821540240a,Namespace:kube-system,Attempt:0,} returns sandbox id \"334a44b77e07c2eff5cc92d2b546064290d416c56d779aea7dc33a2a0f4599e9\"" May 15 09:18:37.129976 kubelet[2282]: E0515 09:18:37.129925 2282 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" May 15 09:18:37.130738 kubelet[2282]: E0515 09:18:37.130701 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:37.136996 containerd[1474]: time="2025-05-15T09:18:37.135922522Z" level=info msg="CreateContainer within sandbox \"334a44b77e07c2eff5cc92d2b546064290d416c56d779aea7dc33a2a0f4599e9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 09:18:37.136996 containerd[1474]: time="2025-05-15T09:18:37.136831484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"36447aa8a70e79abd383701ecf18aa50fd36a3003e9939c01ee0d3d16cf853f2\"" May 15 09:18:37.137755 kubelet[2282]: E0515 09:18:37.137688 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:37.141535 containerd[1474]: time="2025-05-15T09:18:37.141495284Z" level=info msg="CreateContainer within sandbox \"36447aa8a70e79abd383701ecf18aa50fd36a3003e9939c01ee0d3d16cf853f2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 09:18:37.143869 containerd[1474]: 
time="2025-05-15T09:18:37.143839815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfc790d1205e138d75060dd8a7865cbf3f9d29df68c09fb39a6d195925ad9eb2\"" May 15 09:18:37.144691 kubelet[2282]: E0515 09:18:37.144639 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:37.148049 containerd[1474]: time="2025-05-15T09:18:37.147986776Z" level=info msg="CreateContainer within sandbox \"cfc790d1205e138d75060dd8a7865cbf3f9d29df68c09fb39a6d195925ad9eb2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 09:18:37.161214 containerd[1474]: time="2025-05-15T09:18:37.160203131Z" level=info msg="CreateContainer within sandbox \"334a44b77e07c2eff5cc92d2b546064290d416c56d779aea7dc33a2a0f4599e9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a44bec2dc71b7ae8f7491dcd100dc0406ca8d80a4d7b25c62ed0fb94e880cc45\"" May 15 09:18:37.161214 containerd[1474]: time="2025-05-15T09:18:37.161073697Z" level=info msg="StartContainer for \"a44bec2dc71b7ae8f7491dcd100dc0406ca8d80a4d7b25c62ed0fb94e880cc45\"" May 15 09:18:37.168403 containerd[1474]: time="2025-05-15T09:18:37.168250505Z" level=info msg="CreateContainer within sandbox \"36447aa8a70e79abd383701ecf18aa50fd36a3003e9939c01ee0d3d16cf853f2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5039046d51552233f879571cd3228026cbeea2d568300f34d75c0c4d96a2ba1c\"" May 15 09:18:37.169649 containerd[1474]: time="2025-05-15T09:18:37.169381993Z" level=info msg="StartContainer for \"5039046d51552233f879571cd3228026cbeea2d568300f34d75c0c4d96a2ba1c\"" May 15 09:18:37.169852 containerd[1474]: time="2025-05-15T09:18:37.169815594Z" level=info msg="CreateContainer within sandbox 
\"cfc790d1205e138d75060dd8a7865cbf3f9d29df68c09fb39a6d195925ad9eb2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc984c1c0dc70b7a999622ae90cbf05a924c609c93354d3504f7d0de0bacb27d\"" May 15 09:18:37.170397 containerd[1474]: time="2025-05-15T09:18:37.170371909Z" level=info msg="StartContainer for \"bc984c1c0dc70b7a999622ae90cbf05a924c609c93354d3504f7d0de0bacb27d\"" May 15 09:18:37.184108 kubelet[2282]: W0515 09:18:37.184045 2282 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:37.184294 kubelet[2282]: E0515 09:18:37.184199 2282 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused May 15 09:18:37.189365 systemd[1]: Started cri-containerd-a44bec2dc71b7ae8f7491dcd100dc0406ca8d80a4d7b25c62ed0fb94e880cc45.scope - libcontainer container a44bec2dc71b7ae8f7491dcd100dc0406ca8d80a4d7b25c62ed0fb94e880cc45. May 15 09:18:37.191956 systemd[1]: Started cri-containerd-bc984c1c0dc70b7a999622ae90cbf05a924c609c93354d3504f7d0de0bacb27d.scope - libcontainer container bc984c1c0dc70b7a999622ae90cbf05a924c609c93354d3504f7d0de0bacb27d. May 15 09:18:37.197304 systemd[1]: Started cri-containerd-5039046d51552233f879571cd3228026cbeea2d568300f34d75c0c4d96a2ba1c.scope - libcontainer container 5039046d51552233f879571cd3228026cbeea2d568300f34d75c0c4d96a2ba1c. 
May 15 09:18:37.232087 containerd[1474]: time="2025-05-15T09:18:37.232048194Z" level=info msg="StartContainer for \"a44bec2dc71b7ae8f7491dcd100dc0406ca8d80a4d7b25c62ed0fb94e880cc45\" returns successfully" May 15 09:18:37.233372 kubelet[2282]: I0515 09:18:37.232391 2282 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:37.235970 kubelet[2282]: E0515 09:18:37.235908 2282 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" May 15 09:18:37.236125 containerd[1474]: time="2025-05-15T09:18:37.236079488Z" level=info msg="StartContainer for \"bc984c1c0dc70b7a999622ae90cbf05a924c609c93354d3504f7d0de0bacb27d\" returns successfully" May 15 09:18:37.249878 containerd[1474]: time="2025-05-15T09:18:37.244887045Z" level=info msg="StartContainer for \"5039046d51552233f879571cd3228026cbeea2d568300f34d75c0c4d96a2ba1c\" returns successfully" May 15 09:18:37.752185 kubelet[2282]: E0515 09:18:37.752126 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:37.757512 kubelet[2282]: E0515 09:18:37.757486 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:37.764899 kubelet[2282]: E0515 09:18:37.764845 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:38.766152 kubelet[2282]: E0515 09:18:38.766113 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:38.837928 kubelet[2282]: I0515 
09:18:38.837624 2282 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:39.118794 kubelet[2282]: E0515 09:18:39.118624 2282 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 15 09:18:39.202908 kubelet[2282]: I0515 09:18:39.202608 2282 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 09:18:39.719931 kubelet[2282]: I0515 09:18:39.719890 2282 apiserver.go:52] "Watching apiserver" May 15 09:18:39.725333 kubelet[2282]: I0515 09:18:39.725290 2282 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 09:18:40.817505 kubelet[2282]: E0515 09:18:40.817473 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:41.620558 systemd[1]: Reloading requested from client PID 2560 ('systemctl') (unit session-7.scope)... May 15 09:18:41.620575 systemd[1]: Reloading... May 15 09:18:41.696192 zram_generator::config[2605]: No configuration found. May 15 09:18:41.770653 kubelet[2282]: E0515 09:18:41.768981 2282 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:41.781235 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 09:18:41.845836 systemd[1]: Reloading finished in 224 ms. May 15 09:18:41.882981 kubelet[2282]: I0515 09:18:41.882866 2282 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:18:41.883190 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
May 15 09:18:41.893221 systemd[1]: kubelet.service: Deactivated successfully. May 15 09:18:41.894241 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:41.894301 systemd[1]: kubelet.service: Consumed 1.443s CPU time, 115.2M memory peak, 0B memory swap peak. May 15 09:18:41.902860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 09:18:41.993503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 09:18:41.998297 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 09:18:42.044978 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 09:18:42.044978 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 09:18:42.044978 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 09:18:42.045431 kubelet[2641]: I0515 09:18:42.045021 2641 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 09:18:42.049795 kubelet[2641]: I0515 09:18:42.049750 2641 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 09:18:42.049795 kubelet[2641]: I0515 09:18:42.049781 2641 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 09:18:42.050214 kubelet[2641]: I0515 09:18:42.050188 2641 server.go:927] "Client rotation is on, will bootstrap in background" May 15 09:18:42.052101 kubelet[2641]: I0515 09:18:42.052065 2641 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 09:18:42.054171 kubelet[2641]: I0515 09:18:42.053991 2641 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 09:18:42.061052 kubelet[2641]: I0515 09:18:42.061023 2641 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 09:18:42.061286 kubelet[2641]: I0515 09:18:42.061252 2641 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 09:18:42.061468 kubelet[2641]: I0515 09:18:42.061288 2641 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 09:18:42.061547 kubelet[2641]: I0515 09:18:42.061476 2641 topology_manager.go:138] "Creating topology manager with none policy" May 15 
09:18:42.061547 kubelet[2641]: I0515 09:18:42.061486 2641 container_manager_linux.go:301] "Creating device plugin manager" May 15 09:18:42.061599 kubelet[2641]: I0515 09:18:42.061551 2641 state_mem.go:36] "Initialized new in-memory state store" May 15 09:18:42.061677 kubelet[2641]: I0515 09:18:42.061666 2641 kubelet.go:400] "Attempting to sync node with API server" May 15 09:18:42.061706 kubelet[2641]: I0515 09:18:42.061681 2641 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 09:18:42.061738 kubelet[2641]: I0515 09:18:42.061709 2641 kubelet.go:312] "Adding apiserver pod source" May 15 09:18:42.061738 kubelet[2641]: I0515 09:18:42.061724 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 09:18:42.063337 kubelet[2641]: I0515 09:18:42.063300 2641 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 15 09:18:42.063535 kubelet[2641]: I0515 09:18:42.063513 2641 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 09:18:42.063936 kubelet[2641]: I0515 09:18:42.063918 2641 server.go:1264] "Started kubelet" May 15 09:18:42.064107 kubelet[2641]: I0515 09:18:42.064070 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 09:18:42.070188 kubelet[2641]: I0515 09:18:42.067061 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 09:18:42.072932 kubelet[2641]: I0515 09:18:42.072896 2641 server.go:455] "Adding debug handlers to kubelet server" May 15 09:18:42.073436 kubelet[2641]: I0515 09:18:42.073402 2641 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 09:18:42.073501 kubelet[2641]: I0515 09:18:42.068341 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 09:18:42.073793 kubelet[2641]: I0515 09:18:42.073760 2641 server.go:227] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 09:18:42.073895 kubelet[2641]: I0515 09:18:42.073880 2641 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 09:18:42.074134 kubelet[2641]: I0515 09:18:42.074109 2641 reconciler.go:26] "Reconciler: start to sync state" May 15 09:18:42.093371 kubelet[2641]: I0515 09:18:42.091643 2641 factory.go:221] Registration of the systemd container factory successfully May 15 09:18:42.093371 kubelet[2641]: I0515 09:18:42.091755 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 09:18:42.095958 kubelet[2641]: I0515 09:18:42.094403 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 09:18:42.095958 kubelet[2641]: E0515 09:18:42.094934 2641 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 09:18:42.095958 kubelet[2641]: I0515 09:18:42.095271 2641 factory.go:221] Registration of the containerd container factory successfully May 15 09:18:42.095958 kubelet[2641]: I0515 09:18:42.095611 2641 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 15 09:18:42.095958 kubelet[2641]: I0515 09:18:42.095640 2641 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 09:18:42.095958 kubelet[2641]: I0515 09:18:42.095662 2641 kubelet.go:2337] "Starting kubelet main sync loop" May 15 09:18:42.095958 kubelet[2641]: E0515 09:18:42.095703 2641 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 09:18:42.128806 kubelet[2641]: I0515 09:18:42.128778 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 09:18:42.128971 kubelet[2641]: I0515 09:18:42.128956 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 09:18:42.129033 kubelet[2641]: I0515 09:18:42.129025 2641 state_mem.go:36] "Initialized new in-memory state store" May 15 09:18:42.129285 kubelet[2641]: I0515 09:18:42.129266 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 09:18:42.129380 kubelet[2641]: I0515 09:18:42.129353 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 09:18:42.129431 kubelet[2641]: I0515 09:18:42.129422 2641 policy_none.go:49] "None policy: Start" May 15 09:18:42.130640 kubelet[2641]: I0515 09:18:42.130602 2641 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 09:18:42.130640 kubelet[2641]: I0515 09:18:42.130643 2641 state_mem.go:35] "Initializing new in-memory state store" May 15 09:18:42.130858 kubelet[2641]: I0515 09:18:42.130838 2641 state_mem.go:75] "Updated machine memory state" May 15 09:18:42.136325 kubelet[2641]: I0515 09:18:42.136234 2641 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 09:18:42.136472 kubelet[2641]: I0515 09:18:42.136426 2641 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 09:18:42.136565 kubelet[2641]: I0515 09:18:42.136549 2641 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 09:18:42.178922 kubelet[2641]: I0515 09:18:42.178892 2641 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 09:18:42.186479 kubelet[2641]: I0515 09:18:42.186435 2641 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 09:18:42.186580 kubelet[2641]: I0515 09:18:42.186572 2641 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 09:18:42.196001 kubelet[2641]: I0515 09:18:42.195947 2641 topology_manager.go:215] "Topology Admit Handler" podUID="f11c1a596cc34caf0a7a3c821540240a" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 09:18:42.196139 kubelet[2641]: I0515 09:18:42.196074 2641 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 09:18:42.196139 kubelet[2641]: I0515 09:18:42.196114 2641 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 09:18:42.203584 kubelet[2641]: E0515 09:18:42.203518 2641 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 15 09:18:42.274925 kubelet[2641]: I0515 09:18:42.274890 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:42.274925 kubelet[2641]: I0515 09:18:42.274928 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:42.275091 kubelet[2641]: I0515 09:18:42.274950 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:42.275091 kubelet[2641]: I0515 09:18:42.274983 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 09:18:42.275091 kubelet[2641]: I0515 09:18:42.275001 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:42.275091 kubelet[2641]: I0515 09:18:42.275016 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:42.275091 kubelet[2641]: I0515 09:18:42.275036 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 09:18:42.275795 kubelet[2641]: I0515 09:18:42.275052 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:42.275795 kubelet[2641]: I0515 09:18:42.275216 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f11c1a596cc34caf0a7a3c821540240a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f11c1a596cc34caf0a7a3c821540240a\") " pod="kube-system/kube-apiserver-localhost" May 15 09:18:42.502801 kubelet[2641]: E0515 09:18:42.502685 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:42.504051 kubelet[2641]: E0515 09:18:42.504012 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:42.504184 kubelet[2641]: E0515 09:18:42.504162 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:42.621675 sudo[2677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 15 09:18:42.621979 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 15 09:18:43.050391 
sudo[2677]: pam_unix(sudo:session): session closed for user root May 15 09:18:43.062996 kubelet[2641]: I0515 09:18:43.062959 2641 apiserver.go:52] "Watching apiserver" May 15 09:18:43.074896 kubelet[2641]: I0515 09:18:43.074856 2641 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 09:18:43.112913 kubelet[2641]: E0515 09:18:43.112876 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:43.114305 kubelet[2641]: E0515 09:18:43.114113 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:43.119217 kubelet[2641]: E0515 09:18:43.119175 2641 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 09:18:43.119654 kubelet[2641]: E0515 09:18:43.119623 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:43.145750 kubelet[2641]: I0515 09:18:43.145677 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.137779841 podStartE2EDuration="3.137779841s" podCreationTimestamp="2025-05-15 09:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:18:43.136778411 +0000 UTC m=+1.135221881" watchObservedRunningTime="2025-05-15 09:18:43.137779841 +0000 UTC m=+1.136223271" May 15 09:18:43.199988 kubelet[2641]: I0515 09:18:43.199914 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" 
podStartSLOduration=1.199896954 podStartE2EDuration="1.199896954s" podCreationTimestamp="2025-05-15 09:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:18:43.199612815 +0000 UTC m=+1.198056285" watchObservedRunningTime="2025-05-15 09:18:43.199896954 +0000 UTC m=+1.198340424" May 15 09:18:43.200460 kubelet[2641]: I0515 09:18:43.200423 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.200412318 podStartE2EDuration="1.200412318s" podCreationTimestamp="2025-05-15 09:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:18:43.179233316 +0000 UTC m=+1.177676786" watchObservedRunningTime="2025-05-15 09:18:43.200412318 +0000 UTC m=+1.198855788" May 15 09:18:44.117687 kubelet[2641]: E0515 09:18:44.115729 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:44.906356 sudo[1655]: pam_unix(sudo:session): session closed for user root May 15 09:18:44.907769 sshd[1654]: Connection closed by 10.0.0.1 port 34862 May 15 09:18:44.908485 sshd-session[1652]: pam_unix(sshd:session): session closed for user core May 15 09:18:44.913684 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:34862.service: Deactivated successfully. May 15 09:18:44.915472 systemd[1]: session-7.scope: Deactivated successfully. May 15 09:18:44.915676 systemd[1]: session-7.scope: Consumed 8.165s CPU time, 189.9M memory peak, 0B memory swap peak. May 15 09:18:44.916137 systemd-logind[1458]: Session 7 logged out. Waiting for processes to exit. May 15 09:18:44.916976 systemd-logind[1458]: Removed session 7. 
May 15 09:18:47.331781 kubelet[2641]: E0515 09:18:47.331739 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:47.375368 kubelet[2641]: E0515 09:18:47.375328 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:48.121451 kubelet[2641]: E0515 09:18:48.121104 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:48.121451 kubelet[2641]: E0515 09:18:48.121235 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:48.771707 kubelet[2641]: E0515 09:18:48.771674 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:49.122835 kubelet[2641]: E0515 09:18:49.122726 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:53.087895 update_engine[1462]: I20250515 09:18:53.087813 1462 update_attempter.cc:509] Updating boot flags... 
May 15 09:18:53.113172 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2725) May 15 09:18:53.157543 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2728) May 15 09:18:58.660093 kubelet[2641]: I0515 09:18:58.660018 2641 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 09:18:58.661999 kubelet[2641]: I0515 09:18:58.660901 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 09:18:58.662044 containerd[1474]: time="2025-05-15T09:18:58.660740132Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 09:18:58.665210 kubelet[2641]: I0515 09:18:58.662908 2641 topology_manager.go:215] "Topology Admit Handler" podUID="a5ddc493-af04-40b3-89b3-eab46c3e80c9" podNamespace="kube-system" podName="kube-proxy-2mctl" May 15 09:18:58.669071 kubelet[2641]: I0515 09:18:58.669042 2641 topology_manager.go:215] "Topology Admit Handler" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" podNamespace="kube-system" podName="cilium-q8wsm" May 15 09:18:58.673253 systemd[1]: Created slice kubepods-besteffort-poda5ddc493_af04_40b3_89b3_eab46c3e80c9.slice - libcontainer container kubepods-besteffort-poda5ddc493_af04_40b3_89b3_eab46c3e80c9.slice. 
May 15 09:18:58.688301 kubelet[2641]: I0515 09:18:58.688268 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-kernel\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688309 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-run\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688330 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-xtables-lock\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688345 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-cgroup\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688360 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-etc-cni-netd\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688377 2641 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zgmh\" (UniqueName: \"kubernetes.io/projected/a5ddc493-af04-40b3-89b3-eab46c3e80c9-kube-api-access-6zgmh\") pod \"kube-proxy-2mctl\" (UID: \"a5ddc493-af04-40b3-89b3-eab46c3e80c9\") " pod="kube-system/kube-proxy-2mctl" May 15 09:18:58.688431 kubelet[2641]: I0515 09:18:58.688395 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-bpf-maps\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688414 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cni-path\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688429 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hubble-tls\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688444 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-lib-modules\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688458 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-net\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688473 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-config-path\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688579 kubelet[2641]: I0515 09:18:58.688495 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5ddc493-af04-40b3-89b3-eab46c3e80c9-xtables-lock\") pod \"kube-proxy-2mctl\" (UID: \"a5ddc493-af04-40b3-89b3-eab46c3e80c9\") " pod="kube-system/kube-proxy-2mctl" May 15 09:18:58.688701 kubelet[2641]: I0515 09:18:58.688510 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-clustermesh-secrets\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688701 kubelet[2641]: I0515 09:18:58.688533 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4fzx\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-kube-api-access-k4fzx\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.688701 kubelet[2641]: I0515 09:18:58.688552 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a5ddc493-af04-40b3-89b3-eab46c3e80c9-kube-proxy\") pod 
\"kube-proxy-2mctl\" (UID: \"a5ddc493-af04-40b3-89b3-eab46c3e80c9\") " pod="kube-system/kube-proxy-2mctl" May 15 09:18:58.688701 kubelet[2641]: I0515 09:18:58.688570 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ddc493-af04-40b3-89b3-eab46c3e80c9-lib-modules\") pod \"kube-proxy-2mctl\" (UID: \"a5ddc493-af04-40b3-89b3-eab46c3e80c9\") " pod="kube-system/kube-proxy-2mctl" May 15 09:18:58.688701 kubelet[2641]: I0515 09:18:58.688586 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hostproc\") pod \"cilium-q8wsm\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") " pod="kube-system/cilium-q8wsm" May 15 09:18:58.695594 systemd[1]: Created slice kubepods-burstable-podd8c67146_bd2a_4e67_83a0_8fd17ec6b893.slice - libcontainer container kubepods-burstable-podd8c67146_bd2a_4e67_83a0_8fd17ec6b893.slice. May 15 09:18:58.815663 kubelet[2641]: I0515 09:18:58.815395 2641 topology_manager.go:215] "Topology Admit Handler" podUID="336a67c6-4175-4591-99e6-871cc8bc601d" podNamespace="kube-system" podName="cilium-operator-599987898-m2hsn" May 15 09:18:58.831785 systemd[1]: Created slice kubepods-besteffort-pod336a67c6_4175_4591_99e6_871cc8bc601d.slice - libcontainer container kubepods-besteffort-pod336a67c6_4175_4591_99e6_871cc8bc601d.slice. 
May 15 09:18:58.896323 kubelet[2641]: I0515 09:18:58.896279 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fzr\" (UniqueName: \"kubernetes.io/projected/336a67c6-4175-4591-99e6-871cc8bc601d-kube-api-access-96fzr\") pod \"cilium-operator-599987898-m2hsn\" (UID: \"336a67c6-4175-4591-99e6-871cc8bc601d\") " pod="kube-system/cilium-operator-599987898-m2hsn" May 15 09:18:58.896323 kubelet[2641]: I0515 09:18:58.896326 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336a67c6-4175-4591-99e6-871cc8bc601d-cilium-config-path\") pod \"cilium-operator-599987898-m2hsn\" (UID: \"336a67c6-4175-4591-99e6-871cc8bc601d\") " pod="kube-system/cilium-operator-599987898-m2hsn" May 15 09:18:58.993762 kubelet[2641]: E0515 09:18:58.993394 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:58.999362 kubelet[2641]: E0515 09:18:58.999333 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:59.002798 containerd[1474]: time="2025-05-15T09:18:59.002756251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mctl,Uid:a5ddc493-af04-40b3-89b3-eab46c3e80c9,Namespace:kube-system,Attempt:0,}" May 15 09:18:59.003210 containerd[1474]: time="2025-05-15T09:18:59.002756931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8wsm,Uid:d8c67146-bd2a-4e67-83a0-8fd17ec6b893,Namespace:kube-system,Attempt:0,}" May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029370694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029418905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029433428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029193734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029255588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029269751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.029637 containerd[1474]: time="2025-05-15T09:18:59.029354851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.030234 containerd[1474]: time="2025-05-15T09:18:59.029954625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.054291 systemd[1]: Started cri-containerd-8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64.scope - libcontainer container 8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64. May 15 09:18:59.058020 systemd[1]: Started cri-containerd-93da5c0d1c79ffd2e4ad8916c0e08314a2214aab936776f5efee95218103b8e9.scope - libcontainer container 93da5c0d1c79ffd2e4ad8916c0e08314a2214aab936776f5efee95218103b8e9. 
May 15 09:18:59.080622 containerd[1474]: time="2025-05-15T09:18:59.080583169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q8wsm,Uid:d8c67146-bd2a-4e67-83a0-8fd17ec6b893,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\"" May 15 09:18:59.081997 kubelet[2641]: E0515 09:18:59.081976 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:59.085936 containerd[1474]: time="2025-05-15T09:18:59.085889958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2mctl,Uid:a5ddc493-af04-40b3-89b3-eab46c3e80c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"93da5c0d1c79ffd2e4ad8916c0e08314a2214aab936776f5efee95218103b8e9\"" May 15 09:18:59.086639 kubelet[2641]: E0515 09:18:59.086610 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:59.092693 containerd[1474]: time="2025-05-15T09:18:59.092662436Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 09:18:59.092932 containerd[1474]: time="2025-05-15T09:18:59.092907891Z" level=info msg="CreateContainer within sandbox \"93da5c0d1c79ffd2e4ad8916c0e08314a2214aab936776f5efee95218103b8e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 09:18:59.120594 containerd[1474]: time="2025-05-15T09:18:59.120544523Z" level=info msg="CreateContainer within sandbox \"93da5c0d1c79ffd2e4ad8916c0e08314a2214aab936776f5efee95218103b8e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8077cd0525546bde21b850d10cf9616cfb12de2c41298a85fd8224ac3c22019\"" May 15 09:18:59.121435 containerd[1474]: time="2025-05-15T09:18:59.121394113Z" 
level=info msg="StartContainer for \"d8077cd0525546bde21b850d10cf9616cfb12de2c41298a85fd8224ac3c22019\"" May 15 09:18:59.135721 kubelet[2641]: E0515 09:18:59.134736 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:18:59.135929 containerd[1474]: time="2025-05-15T09:18:59.135879959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-m2hsn,Uid:336a67c6-4175-4591-99e6-871cc8bc601d,Namespace:kube-system,Attempt:0,}" May 15 09:18:59.158451 containerd[1474]: time="2025-05-15T09:18:59.158381161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 09:18:59.158606 containerd[1474]: time="2025-05-15T09:18:59.158429292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 09:18:59.158606 containerd[1474]: time="2025-05-15T09:18:59.158440094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.158606 containerd[1474]: time="2025-05-15T09:18:59.158500508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 09:18:59.161297 systemd[1]: Started cri-containerd-d8077cd0525546bde21b850d10cf9616cfb12de2c41298a85fd8224ac3c22019.scope - libcontainer container d8077cd0525546bde21b850d10cf9616cfb12de2c41298a85fd8224ac3c22019. May 15 09:18:59.182305 systemd[1]: Started cri-containerd-8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86.scope - libcontainer container 8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86. 
May 15 09:18:59.201607 containerd[1474]: time="2025-05-15T09:18:59.201553115Z" level=info msg="StartContainer for \"d8077cd0525546bde21b850d10cf9616cfb12de2c41298a85fd8224ac3c22019\" returns successfully" May 15 09:18:59.221301 containerd[1474]: time="2025-05-15T09:18:59.220811470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-m2hsn,Uid:336a67c6-4175-4591-99e6-871cc8bc601d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\"" May 15 09:18:59.222609 kubelet[2641]: E0515 09:18:59.222586 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:00.156786 kubelet[2641]: E0515 09:19:00.156511 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:01.159403 kubelet[2641]: E0515 09:19:01.159359 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:07.610664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3144053385.mount: Deactivated successfully. May 15 09:19:08.393183 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:58862.service - OpenSSH per-connection server daemon (10.0.0.1:58862). May 15 09:19:08.455859 sshd[3048]: Accepted publickey for core from 10.0.0.1 port 58862 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:19:08.457334 sshd-session[3048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:19:08.463128 systemd-logind[1458]: New session 8 of user core. May 15 09:19:08.465524 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 15 09:19:08.610684 sshd[3050]: Connection closed by 10.0.0.1 port 58862 May 15 09:19:08.610981 sshd-session[3048]: pam_unix(sshd:session): session closed for user core May 15 09:19:08.615125 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:58862.service: Deactivated successfully. May 15 09:19:08.616694 systemd[1]: session-8.scope: Deactivated successfully. May 15 09:19:08.617748 systemd-logind[1458]: Session 8 logged out. Waiting for processes to exit. May 15 09:19:08.618949 systemd-logind[1458]: Removed session 8. May 15 09:19:09.011382 containerd[1474]: time="2025-05-15T09:19:09.011323756Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:19:09.012135 containerd[1474]: time="2025-05-15T09:19:09.012090766Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 15 09:19:09.012684 containerd[1474]: time="2025-05-15T09:19:09.012650432Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:19:09.014233 containerd[1474]: time="2025-05-15T09:19:09.014197654Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.921377624s" May 15 09:19:09.014273 containerd[1474]: time="2025-05-15T09:19:09.014236738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 09:19:09.020172 containerd[1474]: time="2025-05-15T09:19:09.018888445Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 09:19:09.022423 containerd[1474]: time="2025-05-15T09:19:09.022115024Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 09:19:09.043840 containerd[1474]: time="2025-05-15T09:19:09.043791532Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\"" May 15 09:19:09.044339 containerd[1474]: time="2025-05-15T09:19:09.044283470Z" level=info msg="StartContainer for \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\"" May 15 09:19:09.067311 systemd[1]: Started cri-containerd-3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d.scope - libcontainer container 3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d. May 15 09:19:09.091402 containerd[1474]: time="2025-05-15T09:19:09.091195223Z" level=info msg="StartContainer for \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\" returns successfully" May 15 09:19:09.143572 systemd[1]: cri-containerd-3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d.scope: Deactivated successfully. 
May 15 09:19:09.171746 kubelet[2641]: E0515 09:19:09.171600 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:09.200374 kubelet[2641]: I0515 09:19:09.197275 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mctl" podStartSLOduration=11.197254648 podStartE2EDuration="11.197254648s" podCreationTimestamp="2025-05-15 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:19:00.173435789 +0000 UTC m=+18.171879259" watchObservedRunningTime="2025-05-15 09:19:09.197254648 +0000 UTC m=+27.195698118" May 15 09:19:09.325629 containerd[1474]: time="2025-05-15T09:19:09.320985070Z" level=info msg="shim disconnected" id=3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d namespace=k8s.io May 15 09:19:09.325629 containerd[1474]: time="2025-05-15T09:19:09.325528804Z" level=warning msg="cleaning up after shim disconnected" id=3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d namespace=k8s.io May 15 09:19:09.325629 containerd[1474]: time="2025-05-15T09:19:09.325543765Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:19:10.039221 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d-rootfs.mount: Deactivated successfully. 
May 15 09:19:10.175175 kubelet[2641]: E0515 09:19:10.174291 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:10.177238 containerd[1474]: time="2025-05-15T09:19:10.177192886Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 09:19:10.193280 containerd[1474]: time="2025-05-15T09:19:10.193222932Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\"" May 15 09:19:10.194128 containerd[1474]: time="2025-05-15T09:19:10.194101989Z" level=info msg="StartContainer for \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\"" May 15 09:19:10.222300 systemd[1]: Started cri-containerd-55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5.scope - libcontainer container 55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5. May 15 09:19:10.244461 containerd[1474]: time="2025-05-15T09:19:10.244397690Z" level=info msg="StartContainer for \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\" returns successfully" May 15 09:19:10.266265 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 09:19:10.266570 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 09:19:10.266716 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... May 15 09:19:10.274538 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 09:19:10.274821 systemd[1]: cri-containerd-55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5.scope: Deactivated successfully. 
May 15 09:19:10.343090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 09:19:10.406605 containerd[1474]: time="2025-05-15T09:19:10.406539196Z" level=info msg="shim disconnected" id=55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5 namespace=k8s.io May 15 09:19:10.406986 containerd[1474]: time="2025-05-15T09:19:10.406780182Z" level=warning msg="cleaning up after shim disconnected" id=55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5 namespace=k8s.io May 15 09:19:10.406986 containerd[1474]: time="2025-05-15T09:19:10.406796424Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:19:10.715188 containerd[1474]: time="2025-05-15T09:19:10.714967579Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:19:10.715490 containerd[1474]: time="2025-05-15T09:19:10.715451512Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" May 15 09:19:10.716231 containerd[1474]: time="2025-05-15T09:19:10.716200035Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 15 09:19:10.718135 containerd[1474]: time="2025-05-15T09:19:10.717792690Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.69886232s" May 15 09:19:10.718135 containerd[1474]: 
time="2025-05-15T09:19:10.717829414Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 09:19:10.720004 containerd[1474]: time="2025-05-15T09:19:10.719974291Z" level=info msg="CreateContainer within sandbox \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 09:19:10.738267 containerd[1474]: time="2025-05-15T09:19:10.738194578Z" level=info msg="CreateContainer within sandbox \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\"" May 15 09:19:10.738711 containerd[1474]: time="2025-05-15T09:19:10.738676071Z" level=info msg="StartContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\"" May 15 09:19:10.765328 systemd[1]: Started cri-containerd-5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc.scope - libcontainer container 5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc. May 15 09:19:10.794238 containerd[1474]: time="2025-05-15T09:19:10.792241093Z" level=info msg="StartContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" returns successfully" May 15 09:19:11.040237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5-rootfs.mount: Deactivated successfully. 
May 15 09:19:11.178584 kubelet[2641]: E0515 09:19:11.178543 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:11.183127 kubelet[2641]: E0515 09:19:11.183101 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:11.186570 containerd[1474]: time="2025-05-15T09:19:11.186518779Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 09:19:11.194949 kubelet[2641]: I0515 09:19:11.194845 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-m2hsn" podStartSLOduration=1.701589348 podStartE2EDuration="13.194828598s" podCreationTimestamp="2025-05-15 09:18:58 +0000 UTC" firstStartedPulling="2025-05-15 09:18:59.22536417 +0000 UTC m=+17.223807640" lastFinishedPulling="2025-05-15 09:19:10.71860342 +0000 UTC m=+28.717046890" observedRunningTime="2025-05-15 09:19:11.19417721 +0000 UTC m=+29.192620640" watchObservedRunningTime="2025-05-15 09:19:11.194828598 +0000 UTC m=+29.193272068" May 15 09:19:11.208066 containerd[1474]: time="2025-05-15T09:19:11.207937112Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\"" May 15 09:19:11.210178 containerd[1474]: time="2025-05-15T09:19:11.209384421Z" level=info msg="StartContainer for \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\"" May 15 09:19:11.263323 systemd[1]: Started 
cri-containerd-4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86.scope - libcontainer container 4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86. May 15 09:19:11.303281 containerd[1474]: time="2025-05-15T09:19:11.303162308Z" level=info msg="StartContainer for \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\" returns successfully" May 15 09:19:11.323511 systemd[1]: cri-containerd-4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86.scope: Deactivated successfully. May 15 09:19:11.416864 containerd[1474]: time="2025-05-15T09:19:11.416789965Z" level=info msg="shim disconnected" id=4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86 namespace=k8s.io May 15 09:19:11.416864 containerd[1474]: time="2025-05-15T09:19:11.416847291Z" level=warning msg="cleaning up after shim disconnected" id=4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86 namespace=k8s.io May 15 09:19:11.416864 containerd[1474]: time="2025-05-15T09:19:11.416857132Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:19:12.039462 systemd[1]: run-containerd-runc-k8s.io-4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86-runc.2b4fMQ.mount: Deactivated successfully. May 15 09:19:12.039568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86-rootfs.mount: Deactivated successfully. 
May 15 09:19:12.189167 kubelet[2641]: E0515 09:19:12.189123 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:12.190449 kubelet[2641]: E0515 09:19:12.189350 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:12.192360 containerd[1474]: time="2025-05-15T09:19:12.192304759Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 09:19:12.245563 containerd[1474]: time="2025-05-15T09:19:12.245523437Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\"" May 15 09:19:12.247258 containerd[1474]: time="2025-05-15T09:19:12.246048411Z" level=info msg="StartContainer for \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\"" May 15 09:19:12.272348 systemd[1]: Started cri-containerd-95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f.scope - libcontainer container 95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f. May 15 09:19:12.291781 systemd[1]: cri-containerd-95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f.scope: Deactivated successfully. 
May 15 09:19:12.295388 containerd[1474]: time="2025-05-15T09:19:12.295275333Z" level=info msg="StartContainer for \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\" returns successfully" May 15 09:19:12.318431 containerd[1474]: time="2025-05-15T09:19:12.318135712Z" level=info msg="shim disconnected" id=95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f namespace=k8s.io May 15 09:19:12.318431 containerd[1474]: time="2025-05-15T09:19:12.318308810Z" level=warning msg="cleaning up after shim disconnected" id=95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f namespace=k8s.io May 15 09:19:12.318431 containerd[1474]: time="2025-05-15T09:19:12.318318491Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 09:19:13.039469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f-rootfs.mount: Deactivated successfully. May 15 09:19:13.193675 kubelet[2641]: E0515 09:19:13.193643 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:13.197411 containerd[1474]: time="2025-05-15T09:19:13.197318783Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 09:19:13.215395 containerd[1474]: time="2025-05-15T09:19:13.215273027Z" level=info msg="CreateContainer within sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\"" May 15 09:19:13.215887 containerd[1474]: time="2025-05-15T09:19:13.215860065Z" level=info msg="StartContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\"" May 15 09:19:13.244297 
systemd[1]: Started cri-containerd-0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd.scope - libcontainer container 0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd. May 15 09:19:13.269799 containerd[1474]: time="2025-05-15T09:19:13.269713516Z" level=info msg="StartContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" returns successfully" May 15 09:19:13.436005 kubelet[2641]: I0515 09:19:13.435866 2641 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 09:19:13.511191 kubelet[2641]: I0515 09:19:13.510090 2641 topology_manager.go:215] "Topology Admit Handler" podUID="fac5045a-d214-4b77-b9ab-b1396ec17ed9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4rnws" May 15 09:19:13.517866 kubelet[2641]: I0515 09:19:13.517232 2641 topology_manager.go:215] "Topology Admit Handler" podUID="1169e957-3bef-429d-802a-6a8af41c7ce3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6q856" May 15 09:19:13.527527 systemd[1]: Created slice kubepods-burstable-podfac5045a_d214_4b77_b9ab_b1396ec17ed9.slice - libcontainer container kubepods-burstable-podfac5045a_d214_4b77_b9ab_b1396ec17ed9.slice. May 15 09:19:13.535863 systemd[1]: Created slice kubepods-burstable-pod1169e957_3bef_429d_802a_6a8af41c7ce3.slice - libcontainer container kubepods-burstable-pod1169e957_3bef_429d_802a_6a8af41c7ce3.slice. May 15 09:19:13.626844 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:53268.service - OpenSSH per-connection server daemon (10.0.0.1:53268). 
May 15 09:19:13.695322 kubelet[2641]: I0515 09:19:13.695213 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1169e957-3bef-429d-802a-6a8af41c7ce3-config-volume\") pod \"coredns-7db6d8ff4d-6q856\" (UID: \"1169e957-3bef-429d-802a-6a8af41c7ce3\") " pod="kube-system/coredns-7db6d8ff4d-6q856" May 15 09:19:13.695322 kubelet[2641]: I0515 09:19:13.695263 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fac5045a-d214-4b77-b9ab-b1396ec17ed9-config-volume\") pod \"coredns-7db6d8ff4d-4rnws\" (UID: \"fac5045a-d214-4b77-b9ab-b1396ec17ed9\") " pod="kube-system/coredns-7db6d8ff4d-4rnws" May 15 09:19:13.695322 kubelet[2641]: I0515 09:19:13.695286 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpnpb\" (UniqueName: \"kubernetes.io/projected/1169e957-3bef-429d-802a-6a8af41c7ce3-kube-api-access-bpnpb\") pod \"coredns-7db6d8ff4d-6q856\" (UID: \"1169e957-3bef-429d-802a-6a8af41c7ce3\") " pod="kube-system/coredns-7db6d8ff4d-6q856" May 15 09:19:13.695322 kubelet[2641]: I0515 09:19:13.695304 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgwp8\" (UniqueName: \"kubernetes.io/projected/fac5045a-d214-4b77-b9ab-b1396ec17ed9-kube-api-access-lgwp8\") pod \"coredns-7db6d8ff4d-4rnws\" (UID: \"fac5045a-d214-4b77-b9ab-b1396ec17ed9\") " pod="kube-system/coredns-7db6d8ff4d-4rnws" May 15 09:19:13.707898 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 53268 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:19:13.708835 sshd-session[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:19:13.714003 systemd-logind[1458]: New session 9 of user core. 
May 15 09:19:13.724335 systemd[1]: Started session-9.scope - Session 9 of User core. May 15 09:19:13.832312 kubelet[2641]: E0515 09:19:13.832245 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:13.833243 containerd[1474]: time="2025-05-15T09:19:13.833204082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rnws,Uid:fac5045a-d214-4b77-b9ab-b1396ec17ed9,Namespace:kube-system,Attempt:0,}" May 15 09:19:13.840653 kubelet[2641]: E0515 09:19:13.840612 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:13.841532 containerd[1474]: time="2025-05-15T09:19:13.841491819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6q856,Uid:1169e957-3bef-429d-802a-6a8af41c7ce3,Namespace:kube-system,Attempt:0,}" May 15 09:19:13.866677 sshd[3453]: Connection closed by 10.0.0.1 port 53268 May 15 09:19:13.868066 sshd-session[3444]: pam_unix(sshd:session): session closed for user core May 15 09:19:13.875598 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:53268.service: Deactivated successfully. May 15 09:19:13.880469 systemd[1]: session-9.scope: Deactivated successfully. May 15 09:19:13.885628 systemd-logind[1458]: Session 9 logged out. Waiting for processes to exit. May 15 09:19:13.887134 systemd-logind[1458]: Removed session 9. 
May 15 09:19:14.207501 kubelet[2641]: E0515 09:19:14.207427 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:14.230480 kubelet[2641]: I0515 09:19:14.230189 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q8wsm" podStartSLOduration=6.30467123 podStartE2EDuration="16.230171842s" podCreationTimestamp="2025-05-15 09:18:58 +0000 UTC" firstStartedPulling="2025-05-15 09:18:59.091085002 +0000 UTC m=+17.089528472" lastFinishedPulling="2025-05-15 09:19:09.016585614 +0000 UTC m=+27.015029084" observedRunningTime="2025-05-15 09:19:14.229899465 +0000 UTC m=+32.228342935" watchObservedRunningTime="2025-05-15 09:19:14.230171842 +0000 UTC m=+32.228615312" May 15 09:19:15.211380 kubelet[2641]: E0515 09:19:15.211337 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:15.441365 systemd-networkd[1404]: cilium_host: Link UP May 15 09:19:15.442177 systemd-networkd[1404]: cilium_net: Link UP May 15 09:19:15.442660 systemd-networkd[1404]: cilium_net: Gained carrier May 15 09:19:15.443278 systemd-networkd[1404]: cilium_host: Gained carrier May 15 09:19:15.523835 systemd-networkd[1404]: cilium_vxlan: Link UP May 15 09:19:15.523843 systemd-networkd[1404]: cilium_vxlan: Gained carrier May 15 09:19:15.842342 kernel: NET: Registered PF_ALG protocol family May 15 09:19:15.908304 systemd-networkd[1404]: cilium_net: Gained IPv6LL May 15 09:19:16.213421 kubelet[2641]: E0515 09:19:16.213365 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 09:19:16.276280 systemd-networkd[1404]: cilium_host: Gained IPv6LL May 15 09:19:16.427010 
systemd-networkd[1404]: lxc_health: Link UP
May 15 09:19:16.434459 systemd-networkd[1404]: lxc_health: Gained carrier
May 15 09:19:16.596265 systemd-networkd[1404]: cilium_vxlan: Gained IPv6LL
May 15 09:19:17.014220 systemd-networkd[1404]: lxcf81331fcb1c2: Link UP
May 15 09:19:17.023250 kernel: eth0: renamed from tmp7b2fa
May 15 09:19:17.031618 systemd-networkd[1404]: lxcf81331fcb1c2: Gained carrier
May 15 09:19:17.036655 systemd-networkd[1404]: lxc484dafb6b6a9: Link UP
May 15 09:19:17.046185 kernel: eth0: renamed from tmpa0814
May 15 09:19:17.051769 systemd-networkd[1404]: lxc484dafb6b6a9: Gained carrier
May 15 09:19:17.215833 kubelet[2641]: E0515 09:19:17.215797 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:17.620303 systemd-networkd[1404]: lxc_health: Gained IPv6LL
May 15 09:19:18.217966 kubelet[2641]: E0515 09:19:18.217757 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:18.324275 systemd-networkd[1404]: lxc484dafb6b6a9: Gained IPv6LL
May 15 09:19:18.453258 systemd-networkd[1404]: lxcf81331fcb1c2: Gained IPv6LL
May 15 09:19:18.882035 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:53270.service - OpenSSH per-connection server daemon (10.0.0.1:53270).
May 15 09:19:18.934795 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 53270 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:18.936299 sshd-session[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:18.940427 systemd-logind[1458]: New session 10 of user core.
May 15 09:19:18.952421 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 09:19:19.089814 sshd[3912]: Connection closed by 10.0.0.1 port 53270
May 15 09:19:19.090411 sshd-session[3910]: pam_unix(sshd:session): session closed for user core
May 15 09:19:19.095353 systemd-logind[1458]: Session 10 logged out. Waiting for processes to exit.
May 15 09:19:19.096451 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:53270.service: Deactivated successfully.
May 15 09:19:19.100024 systemd[1]: session-10.scope: Deactivated successfully.
May 15 09:19:19.103427 systemd-logind[1458]: Removed session 10.
May 15 09:19:19.219844 kubelet[2641]: E0515 09:19:19.219640 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:20.699351 containerd[1474]: time="2025-05-15T09:19:20.699252828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 09:19:20.699351 containerd[1474]: time="2025-05-15T09:19:20.699323632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 09:19:20.699351 containerd[1474]: time="2025-05-15T09:19:20.699339033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:20.700285 containerd[1474]: time="2025-05-15T09:19:20.699435078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:20.708612 containerd[1474]: time="2025-05-15T09:19:20.708446198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 09:19:20.708612 containerd[1474]: time="2025-05-15T09:19:20.708512762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 09:19:20.708612 containerd[1474]: time="2025-05-15T09:19:20.708529683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:20.708786 containerd[1474]: time="2025-05-15T09:19:20.708610487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:20.725352 systemd[1]: Started cri-containerd-7b2fa718b3b0a3c6eb39ded54a8cd0d072e6df2c271e7a258244a6ff3ce67d42.scope - libcontainer container 7b2fa718b3b0a3c6eb39ded54a8cd0d072e6df2c271e7a258244a6ff3ce67d42.
May 15 09:19:20.729754 systemd[1]: Started cri-containerd-a081417dc1551dad441e6079b4c28ff23ef17d4f635228b97d7644481d8ffb3e.scope - libcontainer container a081417dc1551dad441e6079b4c28ff23ef17d4f635228b97d7644481d8ffb3e.
May 15 09:19:20.741910 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 09:19:20.742032 systemd-resolved[1316]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 09:19:20.765752 containerd[1474]: time="2025-05-15T09:19:20.765676131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4rnws,Uid:fac5045a-d214-4b77-b9ab-b1396ec17ed9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b2fa718b3b0a3c6eb39ded54a8cd0d072e6df2c271e7a258244a6ff3ce67d42\""
May 15 09:19:20.765932 containerd[1474]: time="2025-05-15T09:19:20.765820219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6q856,Uid:1169e957-3bef-429d-802a-6a8af41c7ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a081417dc1551dad441e6079b4c28ff23ef17d4f635228b97d7644481d8ffb3e\""
May 15 09:19:20.766560 kubelet[2641]: E0515 09:19:20.766527 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:20.767925 kubelet[2641]: E0515 09:19:20.767201 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:20.769221 containerd[1474]: time="2025-05-15T09:19:20.769069432Z" level=info msg="CreateContainer within sandbox \"7b2fa718b3b0a3c6eb39ded54a8cd0d072e6df2c271e7a258244a6ff3ce67d42\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 09:19:20.772774 containerd[1474]: time="2025-05-15T09:19:20.772724747Z" level=info msg="CreateContainer within sandbox \"a081417dc1551dad441e6079b4c28ff23ef17d4f635228b97d7644481d8ffb3e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 09:19:20.862990 containerd[1474]: time="2025-05-15T09:19:20.862931198Z" level=info msg="CreateContainer within sandbox \"7b2fa718b3b0a3c6eb39ded54a8cd0d072e6df2c271e7a258244a6ff3ce67d42\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0129052a1959d31b7cb76ef11a6bbc0607cc24107f559540f02b86d47b7a868\""
May 15 09:19:20.866075 containerd[1474]: time="2025-05-15T09:19:20.865209600Z" level=info msg="StartContainer for \"b0129052a1959d31b7cb76ef11a6bbc0607cc24107f559540f02b86d47b7a868\""
May 15 09:19:20.869860 containerd[1474]: time="2025-05-15T09:19:20.869814326Z" level=info msg="CreateContainer within sandbox \"a081417dc1551dad441e6079b4c28ff23ef17d4f635228b97d7644481d8ffb3e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8c0ba0f33044800706848d08f461988a3f92535f7bb75cad5fdd8967c90c60b8\""
May 15 09:19:20.870745 containerd[1474]: time="2025-05-15T09:19:20.870709813Z" level=info msg="StartContainer for \"8c0ba0f33044800706848d08f461988a3f92535f7bb75cad5fdd8967c90c60b8\""
May 15 09:19:20.897345 systemd[1]: Started cri-containerd-b0129052a1959d31b7cb76ef11a6bbc0607cc24107f559540f02b86d47b7a868.scope - libcontainer container b0129052a1959d31b7cb76ef11a6bbc0607cc24107f559540f02b86d47b7a868.
May 15 09:19:20.900861 systemd[1]: Started cri-containerd-8c0ba0f33044800706848d08f461988a3f92535f7bb75cad5fdd8967c90c60b8.scope - libcontainer container 8c0ba0f33044800706848d08f461988a3f92535f7bb75cad5fdd8967c90c60b8.
May 15 09:19:20.923495 containerd[1474]: time="2025-05-15T09:19:20.923454467Z" level=info msg="StartContainer for \"b0129052a1959d31b7cb76ef11a6bbc0607cc24107f559540f02b86d47b7a868\" returns successfully"
May 15 09:19:20.936813 containerd[1474]: time="2025-05-15T09:19:20.936764217Z" level=info msg="StartContainer for \"8c0ba0f33044800706848d08f461988a3f92535f7bb75cad5fdd8967c90c60b8\" returns successfully"
May 15 09:19:21.224871 kubelet[2641]: E0515 09:19:21.224791 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:21.236086 kubelet[2641]: I0515 09:19:21.235414 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6q856" podStartSLOduration=23.235395368 podStartE2EDuration="23.235395368s" podCreationTimestamp="2025-05-15 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:19:21.234401876 +0000 UTC m=+39.232845386" watchObservedRunningTime="2025-05-15 09:19:21.235395368 +0000 UTC m=+39.233838838"
May 15 09:19:21.236876 kubelet[2641]: E0515 09:19:21.236847 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:21.248470 kubelet[2641]: I0515 09:19:21.247709 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4rnws" podStartSLOduration=23.247689006 podStartE2EDuration="23.247689006s" podCreationTimestamp="2025-05-15 09:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:19:21.247236782 +0000 UTC m=+39.245680252" watchObservedRunningTime="2025-05-15 09:19:21.247689006 +0000 UTC m=+39.246132476"
May 15 09:19:22.228032 kubelet[2641]: E0515 09:19:22.227996 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:23.842838 kubelet[2641]: E0515 09:19:23.842571 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:24.109445 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:40842.service - OpenSSH per-connection server daemon (10.0.0.1:40842).
May 15 09:19:24.164629 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 40842 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:24.165804 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:24.170135 systemd-logind[1458]: New session 11 of user core.
May 15 09:19:24.179376 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 09:19:24.232220 kubelet[2641]: E0515 09:19:24.232189 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:24.303130 sshd[4106]: Connection closed by 10.0.0.1 port 40842
May 15 09:19:24.303604 sshd-session[4104]: pam_unix(sshd:session): session closed for user core
May 15 09:19:24.316043 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:40842.service: Deactivated successfully.
May 15 09:19:24.318647 systemd[1]: session-11.scope: Deactivated successfully.
May 15 09:19:24.321177 systemd-logind[1458]: Session 11 logged out. Waiting for processes to exit.
May 15 09:19:24.325656 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:40848.service - OpenSSH per-connection server daemon (10.0.0.1:40848).
May 15 09:19:24.327030 systemd-logind[1458]: Removed session 11.
May 15 09:19:24.363592 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 40848 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:24.364864 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:24.369345 systemd-logind[1458]: New session 12 of user core.
May 15 09:19:24.379327 systemd[1]: Started session-12.scope - Session 12 of User core.
May 15 09:19:24.525815 sshd[4121]: Connection closed by 10.0.0.1 port 40848
May 15 09:19:24.526203 sshd-session[4119]: pam_unix(sshd:session): session closed for user core
May 15 09:19:24.535498 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:40848.service: Deactivated successfully.
May 15 09:19:24.538230 systemd[1]: session-12.scope: Deactivated successfully.
May 15 09:19:24.540211 systemd-logind[1458]: Session 12 logged out. Waiting for processes to exit.
May 15 09:19:24.549589 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:40854.service - OpenSSH per-connection server daemon (10.0.0.1:40854).
May 15 09:19:24.554174 systemd-logind[1458]: Removed session 12.
May 15 09:19:24.592732 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 40854 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:24.594087 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:24.598214 systemd-logind[1458]: New session 13 of user core.
May 15 09:19:24.615349 systemd[1]: Started session-13.scope - Session 13 of User core.
May 15 09:19:24.731229 sshd[4133]: Connection closed by 10.0.0.1 port 40854
May 15 09:19:24.732391 sshd-session[4131]: pam_unix(sshd:session): session closed for user core
May 15 09:19:24.735928 systemd-logind[1458]: Session 13 logged out. Waiting for processes to exit.
May 15 09:19:24.736098 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:40854.service: Deactivated successfully.
May 15 09:19:24.738059 systemd[1]: session-13.scope: Deactivated successfully.
May 15 09:19:24.738974 systemd-logind[1458]: Removed session 13.
May 15 09:19:29.743823 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:40870.service - OpenSSH per-connection server daemon (10.0.0.1:40870).
May 15 09:19:29.782040 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 40870 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:29.783205 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:29.786645 systemd-logind[1458]: New session 14 of user core.
May 15 09:19:29.794315 systemd[1]: Started session-14.scope - Session 14 of User core.
May 15 09:19:29.906228 sshd[4150]: Connection closed by 10.0.0.1 port 40870
May 15 09:19:29.906940 sshd-session[4148]: pam_unix(sshd:session): session closed for user core
May 15 09:19:29.910127 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:40870.service: Deactivated successfully.
May 15 09:19:29.912425 systemd[1]: session-14.scope: Deactivated successfully.
May 15 09:19:29.913050 systemd-logind[1458]: Session 14 logged out. Waiting for processes to exit.
May 15 09:19:29.914118 systemd-logind[1458]: Removed session 14.
May 15 09:19:34.917621 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:53382.service - OpenSSH per-connection server daemon (10.0.0.1:53382).
May 15 09:19:34.956768 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 53382 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:34.957956 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:34.962203 systemd-logind[1458]: New session 15 of user core.
May 15 09:19:34.968300 systemd[1]: Started session-15.scope - Session 15 of User core.
May 15 09:19:35.078858 sshd[4164]: Connection closed by 10.0.0.1 port 53382
May 15 09:19:35.079222 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
May 15 09:19:35.089780 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:53382.service: Deactivated successfully.
May 15 09:19:35.093367 systemd[1]: session-15.scope: Deactivated successfully.
May 15 09:19:35.094586 systemd-logind[1458]: Session 15 logged out. Waiting for processes to exit.
May 15 09:19:35.101427 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:53392.service - OpenSSH per-connection server daemon (10.0.0.1:53392).
May 15 09:19:35.103201 systemd-logind[1458]: Removed session 15.
May 15 09:19:35.138186 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 53392 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:35.139342 sshd-session[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:35.143939 systemd-logind[1458]: New session 16 of user core.
May 15 09:19:35.158306 systemd[1]: Started session-16.scope - Session 16 of User core.
May 15 09:19:35.343665 sshd[4178]: Connection closed by 10.0.0.1 port 53392
May 15 09:19:35.344154 sshd-session[4176]: pam_unix(sshd:session): session closed for user core
May 15 09:19:35.357115 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:53392.service: Deactivated successfully.
May 15 09:19:35.358680 systemd[1]: session-16.scope: Deactivated successfully.
May 15 09:19:35.360066 systemd-logind[1458]: Session 16 logged out. Waiting for processes to exit.
May 15 09:19:35.361509 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:53394.service - OpenSSH per-connection server daemon (10.0.0.1:53394).
May 15 09:19:35.363547 systemd-logind[1458]: Removed session 16.
May 15 09:19:35.420074 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 53394 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:35.420886 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:35.424924 systemd-logind[1458]: New session 17 of user core.
May 15 09:19:35.440301 systemd[1]: Started session-17.scope - Session 17 of User core.
May 15 09:19:36.678342 sshd[4191]: Connection closed by 10.0.0.1 port 53394
May 15 09:19:36.678833 sshd-session[4189]: pam_unix(sshd:session): session closed for user core
May 15 09:19:36.687747 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:53394.service: Deactivated successfully.
May 15 09:19:36.691173 systemd[1]: session-17.scope: Deactivated successfully.
May 15 09:19:36.693765 systemd-logind[1458]: Session 17 logged out. Waiting for processes to exit.
May 15 09:19:36.703507 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:53410.service - OpenSSH per-connection server daemon (10.0.0.1:53410).
May 15 09:19:36.704440 systemd-logind[1458]: Removed session 17.
May 15 09:19:36.738956 sshd[4212]: Accepted publickey for core from 10.0.0.1 port 53410 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:36.740182 sshd-session[4212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:36.744010 systemd-logind[1458]: New session 18 of user core.
May 15 09:19:36.759322 systemd[1]: Started session-18.scope - Session 18 of User core.
May 15 09:19:36.973536 sshd[4214]: Connection closed by 10.0.0.1 port 53410
May 15 09:19:36.973930 sshd-session[4212]: pam_unix(sshd:session): session closed for user core
May 15 09:19:36.983007 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:53410.service: Deactivated successfully.
May 15 09:19:36.985396 systemd[1]: session-18.scope: Deactivated successfully.
May 15 09:19:36.988388 systemd-logind[1458]: Session 18 logged out. Waiting for processes to exit.
May 15 09:19:36.994692 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:53424.service - OpenSSH per-connection server daemon (10.0.0.1:53424).
May 15 09:19:36.995837 systemd-logind[1458]: Removed session 18.
May 15 09:19:37.031426 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 53424 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:37.033262 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:37.036915 systemd-logind[1458]: New session 19 of user core.
May 15 09:19:37.046310 systemd[1]: Started session-19.scope - Session 19 of User core.
May 15 09:19:37.159758 sshd[4226]: Connection closed by 10.0.0.1 port 53424
May 15 09:19:37.160114 sshd-session[4224]: pam_unix(sshd:session): session closed for user core
May 15 09:19:37.163289 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:53424.service: Deactivated successfully.
May 15 09:19:37.165055 systemd[1]: session-19.scope: Deactivated successfully.
May 15 09:19:37.165675 systemd-logind[1458]: Session 19 logged out. Waiting for processes to exit.
May 15 09:19:37.166794 systemd-logind[1458]: Removed session 19.
May 15 09:19:42.174311 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:53426.service - OpenSSH per-connection server daemon (10.0.0.1:53426).
May 15 09:19:42.217155 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 53426 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:42.217715 sshd-session[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:42.221799 systemd-logind[1458]: New session 20 of user core.
May 15 09:19:42.231729 systemd[1]: Started session-20.scope - Session 20 of User core.
May 15 09:19:42.351172 sshd[4247]: Connection closed by 10.0.0.1 port 53426
May 15 09:19:42.352399 sshd-session[4245]: pam_unix(sshd:session): session closed for user core
May 15 09:19:42.358824 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:53426.service: Deactivated successfully.
May 15 09:19:42.363427 systemd[1]: session-20.scope: Deactivated successfully.
May 15 09:19:42.364602 systemd-logind[1458]: Session 20 logged out. Waiting for processes to exit.
May 15 09:19:42.369482 systemd-logind[1458]: Removed session 20.
May 15 09:19:47.362731 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:58196.service - OpenSSH per-connection server daemon (10.0.0.1:58196).
May 15 09:19:47.401723 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:47.402930 sshd-session[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:47.406473 systemd-logind[1458]: New session 21 of user core.
May 15 09:19:47.416318 systemd[1]: Started session-21.scope - Session 21 of User core.
May 15 09:19:47.522068 sshd[4261]: Connection closed by 10.0.0.1 port 58196
May 15 09:19:47.522420 sshd-session[4259]: pam_unix(sshd:session): session closed for user core
May 15 09:19:47.525590 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:58196.service: Deactivated successfully.
May 15 09:19:47.527766 systemd[1]: session-21.scope: Deactivated successfully.
May 15 09:19:47.528388 systemd-logind[1458]: Session 21 logged out. Waiting for processes to exit.
May 15 09:19:47.529093 systemd-logind[1458]: Removed session 21.
May 15 09:19:52.533163 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:41444.service - OpenSSH per-connection server daemon (10.0.0.1:41444).
May 15 09:19:52.582613 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:52.583986 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:52.589309 systemd-logind[1458]: New session 22 of user core.
May 15 09:19:52.599335 systemd[1]: Started session-22.scope - Session 22 of User core.
May 15 09:19:52.731230 sshd[4276]: Connection closed by 10.0.0.1 port 41444
May 15 09:19:52.732356 sshd-session[4274]: pam_unix(sshd:session): session closed for user core
May 15 09:19:52.740880 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:41444.service: Deactivated successfully.
May 15 09:19:52.744765 systemd[1]: session-22.scope: Deactivated successfully.
May 15 09:19:52.746296 systemd-logind[1458]: Session 22 logged out. Waiting for processes to exit.
May 15 09:19:52.761813 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:41456.service - OpenSSH per-connection server daemon (10.0.0.1:41456).
May 15 09:19:52.762560 systemd-logind[1458]: Removed session 22.
May 15 09:19:52.802527 sshd[4288]: Accepted publickey for core from 10.0.0.1 port 41456 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:52.802957 sshd-session[4288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:52.807852 systemd-logind[1458]: New session 23 of user core.
May 15 09:19:52.817361 systemd[1]: Started session-23.scope - Session 23 of User core.
May 15 09:19:54.912369 containerd[1474]: time="2025-05-15T09:19:54.911160694Z" level=info msg="StopContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" with timeout 30 (s)"
May 15 09:19:54.913258 containerd[1474]: time="2025-05-15T09:19:54.912713410Z" level=info msg="Stop container \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" with signal terminated"
May 15 09:19:54.924879 systemd[1]: cri-containerd-5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc.scope: Deactivated successfully.
May 15 09:19:54.947715 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc-rootfs.mount: Deactivated successfully.
May 15 09:19:54.955326 containerd[1474]: time="2025-05-15T09:19:54.955251943Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 09:19:54.959475 containerd[1474]: time="2025-05-15T09:19:54.959420919Z" level=info msg="shim disconnected" id=5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc namespace=k8s.io
May 15 09:19:54.959475 containerd[1474]: time="2025-05-15T09:19:54.959465720Z" level=warning msg="cleaning up after shim disconnected" id=5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc namespace=k8s.io
May 15 09:19:54.959475 containerd[1474]: time="2025-05-15T09:19:54.959475320Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:19:54.964328 containerd[1474]: time="2025-05-15T09:19:54.964292910Z" level=info msg="StopContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" with timeout 2 (s)"
May 15 09:19:54.964716 containerd[1474]: time="2025-05-15T09:19:54.964689559Z" level=info msg="Stop container \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" with signal terminated"
May 15 09:19:54.970937 systemd-networkd[1404]: lxc_health: Link DOWN
May 15 09:19:54.970950 systemd-networkd[1404]: lxc_health: Lost carrier
May 15 09:19:55.006809 systemd[1]: cri-containerd-0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd.scope: Deactivated successfully.
May 15 09:19:55.007322 systemd[1]: cri-containerd-0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd.scope: Consumed 6.559s CPU time.
May 15 09:19:55.013957 containerd[1474]: time="2025-05-15T09:19:55.013900359Z" level=info msg="StopContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" returns successfully"
May 15 09:19:55.018165 containerd[1474]: time="2025-05-15T09:19:55.017903608Z" level=info msg="StopPodSandbox for \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\""
May 15 09:19:55.022324 containerd[1474]: time="2025-05-15T09:19:55.022258306Z" level=info msg="Container to stop \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.023992 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86-shm.mount: Deactivated successfully.
May 15 09:19:55.030625 systemd[1]: cri-containerd-8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86.scope: Deactivated successfully.
May 15 09:19:55.040190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd-rootfs.mount: Deactivated successfully.
May 15 09:19:55.047978 containerd[1474]: time="2025-05-15T09:19:55.047775757Z" level=info msg="shim disconnected" id=0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd namespace=k8s.io
May 15 09:19:55.047978 containerd[1474]: time="2025-05-15T09:19:55.047833519Z" level=warning msg="cleaning up after shim disconnected" id=0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd namespace=k8s.io
May 15 09:19:55.047978 containerd[1474]: time="2025-05-15T09:19:55.047842239Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:19:55.059692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86-rootfs.mount: Deactivated successfully.
May 15 09:19:55.063658 containerd[1474]: time="2025-05-15T09:19:55.060747488Z" level=info msg="shim disconnected" id=8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86 namespace=k8s.io
May 15 09:19:55.063658 containerd[1474]: time="2025-05-15T09:19:55.060801409Z" level=warning msg="cleaning up after shim disconnected" id=8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86 namespace=k8s.io
May 15 09:19:55.063658 containerd[1474]: time="2025-05-15T09:19:55.060810249Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:19:55.077365 containerd[1474]: time="2025-05-15T09:19:55.077312979Z" level=info msg="TearDown network for sandbox \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\" successfully"
May 15 09:19:55.077365 containerd[1474]: time="2025-05-15T09:19:55.077349539Z" level=info msg="StopPodSandbox for \"8b6023160089a9273d97c9c5c98f6a4aa42724a2f77c4202a3b44cb105a12d86\" returns successfully"
May 15 09:19:55.084603 containerd[1474]: time="2025-05-15T09:19:55.084546661Z" level=info msg="StopContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" returns successfully"
May 15 09:19:55.084938 containerd[1474]: time="2025-05-15T09:19:55.084896788Z" level=info msg="StopPodSandbox for \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\""
May 15 09:19:55.084938 containerd[1474]: time="2025-05-15T09:19:55.084935869Z" level=info msg="Container to stop \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.085017 containerd[1474]: time="2025-05-15T09:19:55.084948790Z" level=info msg="Container to stop \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.085017 containerd[1474]: time="2025-05-15T09:19:55.084957830Z" level=info msg="Container to stop \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.085017 containerd[1474]: time="2025-05-15T09:19:55.084967630Z" level=info msg="Container to stop \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.085017 containerd[1474]: time="2025-05-15T09:19:55.084975510Z" level=info msg="Container to stop \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 15 09:19:55.090611 systemd[1]: cri-containerd-8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64.scope: Deactivated successfully.
May 15 09:19:55.099184 kubelet[2641]: E0515 09:19:55.097062 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:55.199240 containerd[1474]: time="2025-05-15T09:19:55.198191085Z" level=info msg="shim disconnected" id=8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64 namespace=k8s.io
May 15 09:19:55.199240 containerd[1474]: time="2025-05-15T09:19:55.198246006Z" level=warning msg="cleaning up after shim disconnected" id=8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64 namespace=k8s.io
May 15 09:19:55.199240 containerd[1474]: time="2025-05-15T09:19:55.198254446Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:19:55.211604 containerd[1474]: time="2025-05-15T09:19:55.211517063Z" level=info msg="TearDown network for sandbox \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" successfully"
May 15 09:19:55.211604 containerd[1474]: time="2025-05-15T09:19:55.211590065Z" level=info msg="StopPodSandbox for \"8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64\" returns successfully"
May 15 09:19:55.247628 kubelet[2641]: I0515 09:19:55.247587 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336a67c6-4175-4591-99e6-871cc8bc601d-cilium-config-path\") pod \"336a67c6-4175-4591-99e6-871cc8bc601d\" (UID: \"336a67c6-4175-4591-99e6-871cc8bc601d\") "
May 15 09:19:55.247628 kubelet[2641]: I0515 09:19:55.247666 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96fzr\" (UniqueName: \"kubernetes.io/projected/336a67c6-4175-4591-99e6-871cc8bc601d-kube-api-access-96fzr\") pod \"336a67c6-4175-4591-99e6-871cc8bc601d\" (UID: \"336a67c6-4175-4591-99e6-871cc8bc601d\") "
May 15 09:19:55.255456 kubelet[2641]: I0515 09:19:55.255404 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/336a67c6-4175-4591-99e6-871cc8bc601d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "336a67c6-4175-4591-99e6-871cc8bc601d" (UID: "336a67c6-4175-4591-99e6-871cc8bc601d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 15 09:19:55.256951 kubelet[2641]: I0515 09:19:55.256895 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/336a67c6-4175-4591-99e6-871cc8bc601d-kube-api-access-96fzr" (OuterVolumeSpecName: "kube-api-access-96fzr") pod "336a67c6-4175-4591-99e6-871cc8bc601d" (UID: "336a67c6-4175-4591-99e6-871cc8bc601d"). InnerVolumeSpecName "kube-api-access-96fzr". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 15 09:19:55.287188 kubelet[2641]: I0515 09:19:55.287132 2641 scope.go:117] "RemoveContainer" containerID="0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd"
May 15 09:19:55.288977 containerd[1474]: time="2025-05-15T09:19:55.288940877Z" level=info msg="RemoveContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\""
May 15 09:19:55.293208 systemd[1]: Removed slice kubepods-besteffort-pod336a67c6_4175_4591_99e6_871cc8bc601d.slice - libcontainer container kubepods-besteffort-pod336a67c6_4175_4591_99e6_871cc8bc601d.slice.
May 15 09:19:55.348282 kubelet[2641]: I0515 09:19:55.348243 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-run\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348794 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-lib-modules\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348830 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-config-path\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348856 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-clustermesh-secrets\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348880 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-kernel\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348905 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-etc-cni-netd\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351613 kubelet[2641]: I0515 09:19:55.348931 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hostproc\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.348950 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-xtables-lock\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.348966 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hubble-tls\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.348981 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-bpf-maps\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.348997 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4fzx\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-kube-api-access-k4fzx\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.349012 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-cgroup\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351802 kubelet[2641]: I0515 09:19:55.349025 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-net\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.349040 2641 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cni-path\") pod \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\" (UID: \"d8c67146-bd2a-4e67-83a0-8fd17ec6b893\") "
May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.349079 2641 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/336a67c6-4175-4591-99e6-871cc8bc601d-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.349089 2641 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-96fzr\" (UniqueName: \"kubernetes.io/projected/336a67c6-4175-4591-99e6-871cc8bc601d-kube-api-access-96fzr\") on node \"localhost\" DevicePath \"\""
May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.348621 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "cilium-run".
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.349119 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cni-path" (OuterVolumeSpecName: "cni-path") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.351943 kubelet[2641]: I0515 09:19:55.349189 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352084 kubelet[2641]: I0515 09:19:55.349657 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352084 kubelet[2641]: I0515 09:19:55.351953 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352084 kubelet[2641]: I0515 09:19:55.351999 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352084 kubelet[2641]: I0515 09:19:55.352024 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352084 kubelet[2641]: I0515 09:19:55.352054 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352214 kubelet[2641]: I0515 09:19:55.352070 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hostproc" (OuterVolumeSpecName: "hostproc") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352214 kubelet[2641]: I0515 09:19:55.352085 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 09:19:55.352381 kubelet[2641]: I0515 09:19:55.352346 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 09:19:55.352950 kubelet[2641]: I0515 09:19:55.352909 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 09:19:55.353157 kubelet[2641]: I0515 09:19:55.353095 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 09:19:55.356172 kubelet[2641]: I0515 09:19:55.353650 2641 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-kube-api-access-k4fzx" (OuterVolumeSpecName: "kube-api-access-k4fzx") pod "d8c67146-bd2a-4e67-83a0-8fd17ec6b893" (UID: "d8c67146-bd2a-4e67-83a0-8fd17ec6b893"). InnerVolumeSpecName "kube-api-access-k4fzx". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 09:19:55.357004 containerd[1474]: time="2025-05-15T09:19:55.356445748Z" level=info msg="RemoveContainer for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" returns successfully" May 15 09:19:55.357102 kubelet[2641]: I0515 09:19:55.356780 2641 scope.go:117] "RemoveContainer" containerID="95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f" May 15 09:19:55.357922 containerd[1474]: time="2025-05-15T09:19:55.357881420Z" level=info msg="RemoveContainer for \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\"" May 15 09:19:55.361781 containerd[1474]: time="2025-05-15T09:19:55.361736387Z" level=info msg="RemoveContainer for \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\" returns successfully" May 15 09:19:55.361982 kubelet[2641]: I0515 09:19:55.361948 2641 scope.go:117] "RemoveContainer" containerID="4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86" May 15 09:19:55.363021 containerd[1474]: time="2025-05-15T09:19:55.362990055Z" level=info msg="RemoveContainer for \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\"" May 15 09:19:55.368572 containerd[1474]: time="2025-05-15T09:19:55.368469458Z" level=info msg="RemoveContainer for \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\" returns successfully" May 15 09:19:55.368860 kubelet[2641]: I0515 09:19:55.368820 2641 scope.go:117] "RemoveContainer" 
containerID="55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5" May 15 09:19:55.370421 containerd[1474]: time="2025-05-15T09:19:55.370378660Z" level=info msg="RemoveContainer for \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\"" May 15 09:19:55.391847 containerd[1474]: time="2025-05-15T09:19:55.391806660Z" level=info msg="RemoveContainer for \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\" returns successfully" May 15 09:19:55.392053 kubelet[2641]: I0515 09:19:55.392027 2641 scope.go:117] "RemoveContainer" containerID="3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d" May 15 09:19:55.393418 containerd[1474]: time="2025-05-15T09:19:55.393387015Z" level=info msg="RemoveContainer for \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\"" May 15 09:19:55.396056 containerd[1474]: time="2025-05-15T09:19:55.395959473Z" level=info msg="RemoveContainer for \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\" returns successfully" May 15 09:19:55.396175 kubelet[2641]: I0515 09:19:55.396136 2641 scope.go:117] "RemoveContainer" containerID="0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd" May 15 09:19:55.396369 containerd[1474]: time="2025-05-15T09:19:55.396334561Z" level=error msg="ContainerStatus for \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\": not found" May 15 09:19:55.396490 kubelet[2641]: E0515 09:19:55.396465 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\": not found" containerID="0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd" May 15 09:19:55.396574 kubelet[2641]: I0515 
09:19:55.396496 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd"} err="failed to get container status \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cb4e6e3d8a265eb107bce073812a2fb01ab1994dbabf2ed38a96d6b73ead1fd\": not found" May 15 09:19:55.396607 kubelet[2641]: I0515 09:19:55.396578 2641 scope.go:117] "RemoveContainer" containerID="95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f" May 15 09:19:55.396721 containerd[1474]: time="2025-05-15T09:19:55.396698930Z" level=error msg="ContainerStatus for \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\": not found" May 15 09:19:55.397046 kubelet[2641]: E0515 09:19:55.396920 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\": not found" containerID="95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f" May 15 09:19:55.397046 kubelet[2641]: I0515 09:19:55.396951 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f"} err="failed to get container status \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\": rpc error: code = NotFound desc = an error occurred when try to find container \"95851f9abbf9154be591b3b5a599418b68041ceb25e0dde8950e2ce46fefc53f\": not found" May 15 09:19:55.397046 kubelet[2641]: I0515 09:19:55.396970 2641 scope.go:117] "RemoveContainer" 
containerID="4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86" May 15 09:19:55.397191 containerd[1474]: time="2025-05-15T09:19:55.397113579Z" level=error msg="ContainerStatus for \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\": not found" May 15 09:19:55.397277 kubelet[2641]: E0515 09:19:55.397255 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\": not found" containerID="4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86" May 15 09:19:55.397328 kubelet[2641]: I0515 09:19:55.397278 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86"} err="failed to get container status \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f9d33465c765b99c37e0561881991c68699a99fb189e419b383782e1d7cfb86\": not found" May 15 09:19:55.397328 kubelet[2641]: I0515 09:19:55.397299 2641 scope.go:117] "RemoveContainer" containerID="55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5" May 15 09:19:55.397476 containerd[1474]: time="2025-05-15T09:19:55.397432106Z" level=error msg="ContainerStatus for \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\": not found" May 15 09:19:55.397526 kubelet[2641]: E0515 09:19:55.397514 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = an error occurred when try to find container \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\": not found" containerID="55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5" May 15 09:19:55.397554 kubelet[2641]: I0515 09:19:55.397529 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5"} err="failed to get container status \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\": rpc error: code = NotFound desc = an error occurred when try to find container \"55fe8cefe4ec90a256ad0964abb9c6e1a0918ce5d94d54f718d38a69ff803ba5\": not found" May 15 09:19:55.397554 kubelet[2641]: I0515 09:19:55.397542 2641 scope.go:117] "RemoveContainer" containerID="3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d" May 15 09:19:55.397803 containerd[1474]: time="2025-05-15T09:19:55.397722193Z" level=error msg="ContainerStatus for \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\": not found" May 15 09:19:55.397849 kubelet[2641]: E0515 09:19:55.397829 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\": not found" containerID="3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d" May 15 09:19:55.397883 kubelet[2641]: I0515 09:19:55.397845 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d"} err="failed to get container status \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\": rpc error: 
code = NotFound desc = an error occurred when try to find container \"3ecbf5cdcda27d251d09217f052242dc08c3c5bb0c64fe934699fd6f1202978d\": not found" May 15 09:19:55.397883 kubelet[2641]: I0515 09:19:55.397858 2641 scope.go:117] "RemoveContainer" containerID="5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc" May 15 09:19:55.398912 containerd[1474]: time="2025-05-15T09:19:55.398888059Z" level=info msg="RemoveContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\"" May 15 09:19:55.401104 containerd[1474]: time="2025-05-15T09:19:55.401065347Z" level=info msg="RemoveContainer for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" returns successfully" May 15 09:19:55.401381 kubelet[2641]: I0515 09:19:55.401321 2641 scope.go:117] "RemoveContainer" containerID="5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc" May 15 09:19:55.401621 containerd[1474]: time="2025-05-15T09:19:55.401583999Z" level=error msg="ContainerStatus for \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\": not found" May 15 09:19:55.401757 kubelet[2641]: E0515 09:19:55.401730 2641 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\": not found" containerID="5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc" May 15 09:19:55.401832 kubelet[2641]: I0515 09:19:55.401765 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc"} err="failed to get container status \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\": rpc error: code = NotFound 
desc = an error occurred when try to find container \"5ef72dc72d74ca2fbdd741b33057b1b73e98581f0a93e7857ea866f3dde44ffc\": not found" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.449962 2641 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.449997 2641 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450009 2641 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450018 2641 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450026 2641 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450035 2641 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450043 2641 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hostproc\") on node 
\"localhost\" DevicePath \"\"" May 15 09:19:55.451237 kubelet[2641]: I0515 09:19:55.450051 2641 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450059 2641 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450066 2641 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450074 2641 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-k4fzx\" (UniqueName: \"kubernetes.io/projected/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-kube-api-access-k4fzx\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450082 2641 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450089 2641 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.451514 kubelet[2641]: I0515 09:19:55.450097 2641 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d8c67146-bd2a-4e67-83a0-8fd17ec6b893-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 09:19:55.591882 systemd[1]: Removed slice 
kubepods-burstable-podd8c67146_bd2a_4e67_83a0_8fd17ec6b893.slice - libcontainer container kubepods-burstable-podd8c67146_bd2a_4e67_83a0_8fd17ec6b893.slice. May 15 09:19:55.592029 systemd[1]: kubepods-burstable-podd8c67146_bd2a_4e67_83a0_8fd17ec6b893.slice: Consumed 6.697s CPU time. May 15 09:19:55.931571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64-rootfs.mount: Deactivated successfully. May 15 09:19:55.931663 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ca5e3877073de63bd23b0eb9328b537b6a20fe0ade2f5af858ff57bbb4e2c64-shm.mount: Deactivated successfully. May 15 09:19:55.931720 systemd[1]: var-lib-kubelet-pods-336a67c6\x2d4175\x2d4591\x2d99e6\x2d871cc8bc601d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96fzr.mount: Deactivated successfully. May 15 09:19:55.931772 systemd[1]: var-lib-kubelet-pods-d8c67146\x2dbd2a\x2d4e67\x2d83a0\x2d8fd17ec6b893-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4fzx.mount: Deactivated successfully. May 15 09:19:55.931827 systemd[1]: var-lib-kubelet-pods-d8c67146\x2dbd2a\x2d4e67\x2d83a0\x2d8fd17ec6b893-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 09:19:55.931877 systemd[1]: var-lib-kubelet-pods-d8c67146\x2dbd2a\x2d4e67\x2d83a0\x2d8fd17ec6b893-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 15 09:19:56.099227 kubelet[2641]: I0515 09:19:56.099191 2641 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="336a67c6-4175-4591-99e6-871cc8bc601d" path="/var/lib/kubelet/pods/336a67c6-4175-4591-99e6-871cc8bc601d/volumes" May 15 09:19:56.099688 kubelet[2641]: I0515 09:19:56.099593 2641 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" path="/var/lib/kubelet/pods/d8c67146-bd2a-4e67-83a0-8fd17ec6b893/volumes" May 15 09:19:56.862763 sshd[4290]: Connection closed by 10.0.0.1 port 41456 May 15 09:19:56.862765 sshd-session[4288]: pam_unix(sshd:session): session closed for user core May 15 09:19:56.871645 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:41456.service: Deactivated successfully. May 15 09:19:56.873423 systemd[1]: session-23.scope: Deactivated successfully. May 15 09:19:56.873681 systemd[1]: session-23.scope: Consumed 1.414s CPU time. May 15 09:19:56.874898 systemd-logind[1458]: Session 23 logged out. Waiting for processes to exit. May 15 09:19:56.879415 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:41472.service - OpenSSH per-connection server daemon (10.0.0.1:41472). May 15 09:19:56.880366 systemd-logind[1458]: Removed session 23. May 15 09:19:56.916962 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 41472 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:19:56.918543 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:19:56.923072 systemd-logind[1458]: New session 24 of user core. May 15 09:19:56.933320 systemd[1]: Started session-24.scope - Session 24 of User core. 
May 15 09:19:57.151859 kubelet[2641]: E0515 09:19:57.151733 2641 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 09:19:58.737723 sshd[4453]: Connection closed by 10.0.0.1 port 41472 May 15 09:19:58.737458 sshd-session[4451]: pam_unix(sshd:session): session closed for user core May 15 09:19:58.742564 kubelet[2641]: I0515 09:19:58.742505 2641 topology_manager.go:215] "Topology Admit Handler" podUID="33785d18-fd97-4b35-87c1-84d26dc1900c" podNamespace="kube-system" podName="cilium-4mqkq" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742640 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="apply-sysctl-overwrites" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742650 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="336a67c6-4175-4591-99e6-871cc8bc601d" containerName="cilium-operator" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742656 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="mount-bpf-fs" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742662 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="clean-cilium-state" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742668 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="cilium-agent" May 15 09:19:58.742848 kubelet[2641]: E0515 09:19:58.742674 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="mount-cgroup" May 15 09:19:58.742848 kubelet[2641]: I0515 09:19:58.742697 2641 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="336a67c6-4175-4591-99e6-871cc8bc601d" containerName="cilium-operator" May 15 09:19:58.742848 kubelet[2641]: I0515 09:19:58.742703 2641 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8c67146-bd2a-4e67-83a0-8fd17ec6b893" containerName="cilium-agent" May 15 09:19:58.747485 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:41472.service: Deactivated successfully. May 15 09:19:58.749672 systemd[1]: session-24.scope: Deactivated successfully. May 15 09:19:58.751409 systemd[1]: session-24.scope: Consumed 1.726s CPU time. May 15 09:19:58.758353 systemd-logind[1458]: Session 24 logged out. Waiting for processes to exit. May 15 09:19:58.772618 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:41484.service - OpenSSH per-connection server daemon (10.0.0.1:41484). May 15 09:19:58.777193 systemd-logind[1458]: Removed session 24. May 15 09:19:58.780399 systemd[1]: Created slice kubepods-burstable-pod33785d18_fd97_4b35_87c1_84d26dc1900c.slice - libcontainer container kubepods-burstable-pod33785d18_fd97_4b35_87c1_84d26dc1900c.slice. May 15 09:19:58.810824 sshd[4465]: Accepted publickey for core from 10.0.0.1 port 41484 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM May 15 09:19:58.812055 sshd-session[4465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 09:19:58.816015 systemd-logind[1458]: New session 25 of user core. May 15 09:19:58.828319 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 15 09:19:58.869268 kubelet[2641]: I0515 09:19:58.869195 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-cilium-cgroup\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869268 kubelet[2641]: I0515 09:19:58.869241 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-xtables-lock\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869268 kubelet[2641]: I0515 09:19:58.869263 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-lib-modules\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869268 kubelet[2641]: I0515 09:19:58.869280 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-host-proc-sys-kernel\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869298 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-host-proc-sys-net\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869314 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-cni-path\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869329 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/33785d18-fd97-4b35-87c1-84d26dc1900c-clustermesh-secrets\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869344 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/33785d18-fd97-4b35-87c1-84d26dc1900c-hubble-tls\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869359 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-hostproc\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869497 kubelet[2641]: I0515 09:19:58.869381 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-etc-cni-netd\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869613 kubelet[2641]: I0515 09:19:58.869401 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-cilium-run\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869613 kubelet[2641]: I0515 09:19:58.869420 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33785d18-fd97-4b35-87c1-84d26dc1900c-cilium-config-path\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869613 kubelet[2641]: I0515 09:19:58.869435 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/33785d18-fd97-4b35-87c1-84d26dc1900c-cilium-ipsec-secrets\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869613 kubelet[2641]: I0515 09:19:58.869459 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/33785d18-fd97-4b35-87c1-84d26dc1900c-bpf-maps\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.869613 kubelet[2641]: I0515 09:19:58.869475 2641 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bxz\" (UniqueName: \"kubernetes.io/projected/33785d18-fd97-4b35-87c1-84d26dc1900c-kube-api-access-96bxz\") pod \"cilium-4mqkq\" (UID: \"33785d18-fd97-4b35-87c1-84d26dc1900c\") " pod="kube-system/cilium-4mqkq"
May 15 09:19:58.882653 sshd[4467]: Connection closed by 10.0.0.1 port 41484
May 15 09:19:58.883331 sshd-session[4465]: pam_unix(sshd:session): session closed for user core
May 15 09:19:58.895847 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:41484.service: Deactivated successfully.
May 15 09:19:58.897628 systemd[1]: session-25.scope: Deactivated successfully.
May 15 09:19:58.899217 systemd-logind[1458]: Session 25 logged out. Waiting for processes to exit.
May 15 09:19:58.910419 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:41488.service - OpenSSH per-connection server daemon (10.0.0.1:41488).
May 15 09:19:58.911580 systemd-logind[1458]: Removed session 25.
May 15 09:19:58.946928 sshd[4473]: Accepted publickey for core from 10.0.0.1 port 41488 ssh2: RSA SHA256:WkIAsgpl9pWuA3CA3XKXwngejn6wwNHDmIkCm2YhEjM
May 15 09:19:58.948223 sshd-session[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 09:19:58.951994 systemd-logind[1458]: New session 26 of user core.
May 15 09:19:58.959305 systemd[1]: Started session-26.scope - Session 26 of User core.
May 15 09:19:59.083672 kubelet[2641]: E0515 09:19:59.083545 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:59.084721 containerd[1474]: time="2025-05-15T09:19:59.084602401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mqkq,Uid:33785d18-fd97-4b35-87c1-84d26dc1900c,Namespace:kube-system,Attempt:0,}"
May 15 09:19:59.102233 containerd[1474]: time="2025-05-15T09:19:59.102106721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 09:19:59.102233 containerd[1474]: time="2025-05-15T09:19:59.102180483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 09:19:59.102233 containerd[1474]: time="2025-05-15T09:19:59.102192883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:59.102455 containerd[1474]: time="2025-05-15T09:19:59.102268725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 09:19:59.119415 systemd[1]: Started cri-containerd-f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8.scope - libcontainer container f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8.
May 15 09:19:59.157420 containerd[1474]: time="2025-05-15T09:19:59.157351978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4mqkq,Uid:33785d18-fd97-4b35-87c1-84d26dc1900c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\""
May 15 09:19:59.158125 kubelet[2641]: E0515 09:19:59.158104 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:19:59.161284 containerd[1474]: time="2025-05-15T09:19:59.161194617Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 09:19:59.331401 containerd[1474]: time="2025-05-15T09:19:59.331339519Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8\""
May 15 09:19:59.331883 containerd[1474]: time="2025-05-15T09:19:59.331853129Z" level=info msg="StartContainer for \"c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8\""
May 15 09:19:59.355339 systemd[1]: Started cri-containerd-c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8.scope - libcontainer container c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8.
May 15 09:19:59.378585 containerd[1474]: time="2025-05-15T09:19:59.378518930Z" level=info msg="StartContainer for \"c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8\" returns successfully"
May 15 09:19:59.391302 systemd[1]: cri-containerd-c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8.scope: Deactivated successfully.
May 15 09:19:59.428869 containerd[1474]: time="2025-05-15T09:19:59.428809924Z" level=info msg="shim disconnected" id=c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8 namespace=k8s.io
May 15 09:19:59.428869 containerd[1474]: time="2025-05-15T09:19:59.428862206Z" level=warning msg="cleaning up after shim disconnected" id=c16d815ef704ae9517e2f7048b3dcece580089614fb51dfd79d35dfa6066bdd8 namespace=k8s.io
May 15 09:19:59.428869 containerd[1474]: time="2025-05-15T09:19:59.428871766Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:20:00.305092 kubelet[2641]: E0515 09:20:00.304975 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:00.308406 containerd[1474]: time="2025-05-15T09:20:00.308350856Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 09:20:00.328787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243267818.mount: Deactivated successfully.
May 15 09:20:00.329823 containerd[1474]: time="2025-05-15T09:20:00.329788088Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72\""
May 15 09:20:00.330639 containerd[1474]: time="2025-05-15T09:20:00.330609184Z" level=info msg="StartContainer for \"78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72\""
May 15 09:20:00.352307 systemd[1]: Started cri-containerd-78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72.scope - libcontainer container 78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72.
May 15 09:20:00.376657 containerd[1474]: time="2025-05-15T09:20:00.376609112Z" level=info msg="StartContainer for \"78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72\" returns successfully"
May 15 09:20:00.388372 systemd[1]: cri-containerd-78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72.scope: Deactivated successfully.
May 15 09:20:00.408340 containerd[1474]: time="2025-05-15T09:20:00.408278950Z" level=info msg="shim disconnected" id=78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72 namespace=k8s.io
May 15 09:20:00.408715 containerd[1474]: time="2025-05-15T09:20:00.408549916Z" level=warning msg="cleaning up after shim disconnected" id=78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72 namespace=k8s.io
May 15 09:20:00.408715 containerd[1474]: time="2025-05-15T09:20:00.408567236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:20:00.418177 containerd[1474]: time="2025-05-15T09:20:00.418086948Z" level=warning msg="cleanup warnings time=\"2025-05-15T09:20:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 15 09:20:00.974775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78865d1fbc3d03e206385b03628bd259ceaf16f7bd1d893374888bc8412b5b72-rootfs.mount: Deactivated successfully.
May 15 09:20:01.097521 kubelet[2641]: E0515 09:20:01.097490 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:01.318561 kubelet[2641]: E0515 09:20:01.311168 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:01.318864 containerd[1474]: time="2025-05-15T09:20:01.315357590Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 09:20:01.365523 containerd[1474]: time="2025-05-15T09:20:01.365471540Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9\""
May 15 09:20:01.366179 containerd[1474]: time="2025-05-15T09:20:01.366150994Z" level=info msg="StartContainer for \"0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9\""
May 15 09:20:01.404359 systemd[1]: Started cri-containerd-0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9.scope - libcontainer container 0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9.
May 15 09:20:01.433613 systemd[1]: cri-containerd-0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9.scope: Deactivated successfully.
May 15 09:20:01.447575 containerd[1474]: time="2025-05-15T09:20:01.447226435Z" level=info msg="StartContainer for \"0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9\" returns successfully"
May 15 09:20:01.474107 containerd[1474]: time="2025-05-15T09:20:01.474036005Z" level=info msg="shim disconnected" id=0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9 namespace=k8s.io
May 15 09:20:01.474107 containerd[1474]: time="2025-05-15T09:20:01.474090326Z" level=warning msg="cleaning up after shim disconnected" id=0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9 namespace=k8s.io
May 15 09:20:01.474107 containerd[1474]: time="2025-05-15T09:20:01.474098286Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:20:01.974803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a2173a241b0c745e0e1e5b58fd13e8bea6df834e42be813bf9db29953576ff9-rootfs.mount: Deactivated successfully.
May 15 09:20:02.152614 kubelet[2641]: E0515 09:20:02.152568 2641 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 09:20:02.317503 kubelet[2641]: E0515 09:20:02.317279 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:02.320441 containerd[1474]: time="2025-05-15T09:20:02.320305799Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 09:20:02.338218 containerd[1474]: time="2025-05-15T09:20:02.338175025Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285\""
May 15 09:20:02.338985 containerd[1474]: time="2025-05-15T09:20:02.338845918Z" level=info msg="StartContainer for \"7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285\""
May 15 09:20:02.363306 systemd[1]: Started cri-containerd-7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285.scope - libcontainer container 7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285.
May 15 09:20:02.381674 systemd[1]: cri-containerd-7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285.scope: Deactivated successfully.
May 15 09:20:02.384648 containerd[1474]: time="2025-05-15T09:20:02.384602284Z" level=info msg="StartContainer for \"7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285\" returns successfully"
May 15 09:20:02.402979 containerd[1474]: time="2025-05-15T09:20:02.402844957Z" level=info msg="shim disconnected" id=7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285 namespace=k8s.io
May 15 09:20:02.402979 containerd[1474]: time="2025-05-15T09:20:02.402905758Z" level=warning msg="cleaning up after shim disconnected" id=7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285 namespace=k8s.io
May 15 09:20:02.402979 containerd[1474]: time="2025-05-15T09:20:02.402914438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 09:20:02.974863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c841494a238061b08d966c31e422715a30a03dd8ba587c5cc22a1e9bc487285-rootfs.mount: Deactivated successfully.
May 15 09:20:03.321291 kubelet[2641]: E0515 09:20:03.321185 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:03.324217 containerd[1474]: time="2025-05-15T09:20:03.324174914Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 09:20:03.348250 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22498004.mount: Deactivated successfully.
May 15 09:20:03.350299 containerd[1474]: time="2025-05-15T09:20:03.350257089Z" level=info msg="CreateContainer within sandbox \"f9f774a4aeeb5d943cbd6cc4e93b681807ce0bda0903bb00a690d3f2c2396db8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9d82805f50c3eaf7483670e481ac217904b5daee8febc015db0e31951f84b0d\""
May 15 09:20:03.353206 containerd[1474]: time="2025-05-15T09:20:03.352771937Z" level=info msg="StartContainer for \"e9d82805f50c3eaf7483670e481ac217904b5daee8febc015db0e31951f84b0d\""
May 15 09:20:03.388362 systemd[1]: Started cri-containerd-e9d82805f50c3eaf7483670e481ac217904b5daee8febc015db0e31951f84b0d.scope - libcontainer container e9d82805f50c3eaf7483670e481ac217904b5daee8febc015db0e31951f84b0d.
May 15 09:20:03.421266 containerd[1474]: time="2025-05-15T09:20:03.421213116Z" level=info msg="StartContainer for \"e9d82805f50c3eaf7483670e481ac217904b5daee8febc015db0e31951f84b0d\" returns successfully"
May 15 09:20:03.693179 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 09:20:03.786012 kubelet[2641]: I0515 09:20:03.785641 2641 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T09:20:03Z","lastTransitionTime":"2025-05-15T09:20:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 09:20:04.097651 kubelet[2641]: E0515 09:20:04.097333 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:04.326710 kubelet[2641]: E0515 09:20:04.326270 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:05.330446 kubelet[2641]: E0515 09:20:05.330275 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:06.332409 kubelet[2641]: E0515 09:20:06.332364 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:06.539648 systemd-networkd[1404]: lxc_health: Link UP
May 15 09:20:06.545835 systemd-networkd[1404]: lxc_health: Gained carrier
May 15 09:20:07.110295 kubelet[2641]: I0515 09:20:07.110236 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4mqkq" podStartSLOduration=9.11021555 podStartE2EDuration="9.11021555s" podCreationTimestamp="2025-05-15 09:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 09:20:04.342238915 +0000 UTC m=+82.340682385" watchObservedRunningTime="2025-05-15 09:20:07.11021555 +0000 UTC m=+85.108659020"
May 15 09:20:07.333562 kubelet[2641]: E0515 09:20:07.333514 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:08.181279 systemd-networkd[1404]: lxc_health: Gained IPv6LL
May 15 09:20:08.339566 kubelet[2641]: E0515 09:20:08.339519 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:09.338555 kubelet[2641]: E0515 09:20:09.338483 2641 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 09:20:11.718990 sshd[4475]: Connection closed by 10.0.0.1 port 41488
May 15 09:20:11.719498 sshd-session[4473]: pam_unix(sshd:session): session closed for user core
May 15 09:20:11.722817 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:41488.service: Deactivated successfully.
May 15 09:20:11.724684 systemd[1]: session-26.scope: Deactivated successfully.
May 15 09:20:11.725373 systemd-logind[1458]: Session 26 logged out. Waiting for processes to exit.
May 15 09:20:11.726294 systemd-logind[1458]: Removed session 26.