Apr 30 00:55:37.936851 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Apr 30 00:55:37.936876 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Apr 29 23:08:45 -00 2025
Apr 30 00:55:37.936886 kernel: KASLR enabled
Apr 30 00:55:37.936892 kernel: efi: EFI v2.7 by EDK II
Apr 30 00:55:37.936898 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Apr 30 00:55:37.936904 kernel: random: crng init done
Apr 30 00:55:37.936911 kernel: ACPI: Early table checksum verification disabled
Apr 30 00:55:37.936917 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Apr 30 00:55:37.936924 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Apr 30 00:55:37.937031 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937040 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937046 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937053 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937059 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937067 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937077 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937084 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937090 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 30 00:55:37.937097 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Apr 30 00:55:37.937104 kernel: NUMA: Failed to initialise from firmware
Apr 30 00:55:37.937111 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:55:37.937117 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Apr 30 00:55:37.937124 kernel: Zone ranges:
Apr 30 00:55:37.937131 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:55:37.937137 kernel: DMA32 empty
Apr 30 00:55:37.937145 kernel: Normal empty
Apr 30 00:55:37.937152 kernel: Movable zone start for each node
Apr 30 00:55:37.937159 kernel: Early memory node ranges
Apr 30 00:55:37.937165 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Apr 30 00:55:37.937172 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Apr 30 00:55:37.937179 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Apr 30 00:55:37.937186 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Apr 30 00:55:37.937193 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Apr 30 00:55:37.937199 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Apr 30 00:55:37.937206 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Apr 30 00:55:37.937212 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Apr 30 00:55:37.937219 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Apr 30 00:55:37.937227 kernel: psci: probing for conduit method from ACPI.
Apr 30 00:55:37.937234 kernel: psci: PSCIv1.1 detected in firmware.
Apr 30 00:55:37.937241 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 30 00:55:37.937250 kernel: psci: Trusted OS migration not required
Apr 30 00:55:37.937257 kernel: psci: SMC Calling Convention v1.1
Apr 30 00:55:37.937265 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Apr 30 00:55:37.937273 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Apr 30 00:55:37.937281 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Apr 30 00:55:37.937288 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Apr 30 00:55:37.937295 kernel: Detected PIPT I-cache on CPU0
Apr 30 00:55:37.937302 kernel: CPU features: detected: GIC system register CPU interface
Apr 30 00:55:37.937309 kernel: CPU features: detected: Hardware dirty bit management
Apr 30 00:55:37.937316 kernel: CPU features: detected: Spectre-v4
Apr 30 00:55:37.937323 kernel: CPU features: detected: Spectre-BHB
Apr 30 00:55:37.937330 kernel: CPU features: kernel page table isolation forced ON by KASLR
Apr 30 00:55:37.937337 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Apr 30 00:55:37.937346 kernel: CPU features: detected: ARM erratum 1418040
Apr 30 00:55:37.937353 kernel: CPU features: detected: SSBS not fully self-synchronizing
Apr 30 00:55:37.937360 kernel: alternatives: applying boot alternatives
Apr 30 00:55:37.937369 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:55:37.937376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 30 00:55:37.937384 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 30 00:55:37.937391 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 30 00:55:37.937398 kernel: Fallback order for Node 0: 0
Apr 30 00:55:37.937405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Apr 30 00:55:37.937412 kernel: Policy zone: DMA
Apr 30 00:55:37.937419 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 30 00:55:37.937428 kernel: software IO TLB: area num 4.
Apr 30 00:55:37.937452 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Apr 30 00:55:37.937461 kernel: Memory: 2386468K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185820K reserved, 0K cma-reserved)
Apr 30 00:55:37.937468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 30 00:55:37.937475 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 30 00:55:37.937483 kernel: rcu: RCU event tracing is enabled.
Apr 30 00:55:37.937490 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 30 00:55:37.937498 kernel: Trampoline variant of Tasks RCU enabled.
Apr 30 00:55:37.937505 kernel: Tracing variant of Tasks RCU enabled.
Apr 30 00:55:37.937512 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 30 00:55:37.937519 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 30 00:55:37.937526 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 30 00:55:37.937536 kernel: GICv3: 256 SPIs implemented
Apr 30 00:55:37.937543 kernel: GICv3: 0 Extended SPIs implemented
Apr 30 00:55:37.937550 kernel: Root IRQ handler: gic_handle_irq
Apr 30 00:55:37.937558 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Apr 30 00:55:37.937565 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Apr 30 00:55:37.937572 kernel: ITS [mem 0x08080000-0x0809ffff]
Apr 30 00:55:37.937579 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Apr 30 00:55:37.937587 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Apr 30 00:55:37.937594 kernel: GICv3: using LPI property table @0x00000000400f0000
Apr 30 00:55:37.937601 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Apr 30 00:55:37.937608 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 30 00:55:37.937617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:37.937625 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Apr 30 00:55:37.937638 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Apr 30 00:55:37.937645 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Apr 30 00:55:37.937652 kernel: arm-pv: using stolen time PV
Apr 30 00:55:37.937660 kernel: Console: colour dummy device 80x25
Apr 30 00:55:37.937667 kernel: ACPI: Core revision 20230628
Apr 30 00:55:37.937675 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Apr 30 00:55:37.937682 kernel: pid_max: default: 32768 minimum: 301
Apr 30 00:55:37.937689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 30 00:55:37.937698 kernel: landlock: Up and running.
Apr 30 00:55:37.937706 kernel: SELinux: Initializing.
Apr 30 00:55:37.937713 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:55:37.937721 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 30 00:55:37.937728 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:55:37.937736 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 30 00:55:37.937743 kernel: rcu: Hierarchical SRCU implementation.
Apr 30 00:55:37.937750 kernel: rcu: Max phase no-delay instances is 400.
Apr 30 00:55:37.937757 kernel: Platform MSI: ITS@0x8080000 domain created
Apr 30 00:55:37.937766 kernel: PCI/MSI: ITS@0x8080000 domain created
Apr 30 00:55:37.937773 kernel: Remapping and enabling EFI services.
Apr 30 00:55:37.937780 kernel: smp: Bringing up secondary CPUs ...
Apr 30 00:55:37.937788 kernel: Detected PIPT I-cache on CPU1
Apr 30 00:55:37.937795 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Apr 30 00:55:37.937803 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Apr 30 00:55:37.937810 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:37.937818 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Apr 30 00:55:37.937825 kernel: Detected PIPT I-cache on CPU2
Apr 30 00:55:37.937833 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Apr 30 00:55:37.937842 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Apr 30 00:55:37.937850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:37.937863 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Apr 30 00:55:37.937873 kernel: Detected PIPT I-cache on CPU3
Apr 30 00:55:37.937880 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Apr 30 00:55:37.937888 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Apr 30 00:55:37.937896 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Apr 30 00:55:37.937903 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Apr 30 00:55:37.937911 kernel: smp: Brought up 1 node, 4 CPUs
Apr 30 00:55:37.937921 kernel: SMP: Total of 4 processors activated.
Apr 30 00:55:37.937930 kernel: CPU features: detected: 32-bit EL0 Support
Apr 30 00:55:37.937938 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Apr 30 00:55:37.937946 kernel: CPU features: detected: Common not Private translations
Apr 30 00:55:37.937954 kernel: CPU features: detected: CRC32 instructions
Apr 30 00:55:37.937962 kernel: CPU features: detected: Enhanced Virtualization Traps
Apr 30 00:55:37.937970 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Apr 30 00:55:37.937978 kernel: CPU features: detected: LSE atomic instructions
Apr 30 00:55:37.938005 kernel: CPU features: detected: Privileged Access Never
Apr 30 00:55:37.938014 kernel: CPU features: detected: RAS Extension Support
Apr 30 00:55:37.938022 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Apr 30 00:55:37.938029 kernel: CPU: All CPU(s) started at EL1
Apr 30 00:55:37.938037 kernel: alternatives: applying system-wide alternatives
Apr 30 00:55:37.938045 kernel: devtmpfs: initialized
Apr 30 00:55:37.938053 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 30 00:55:37.938073 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 30 00:55:37.938084 kernel: pinctrl core: initialized pinctrl subsystem
Apr 30 00:55:37.938095 kernel: SMBIOS 3.0.0 present.
Apr 30 00:55:37.938103 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Apr 30 00:55:37.938111 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 30 00:55:37.938119 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 30 00:55:37.938127 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 30 00:55:37.938135 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 30 00:55:37.938142 kernel: audit: initializing netlink subsys (disabled)
Apr 30 00:55:37.938150 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Apr 30 00:55:37.938158 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 30 00:55:37.938167 kernel: cpuidle: using governor menu
Apr 30 00:55:37.938175 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 30 00:55:37.938183 kernel: ASID allocator initialised with 32768 entries
Apr 30 00:55:37.938190 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 30 00:55:37.938198 kernel: Serial: AMBA PL011 UART driver
Apr 30 00:55:37.938206 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Apr 30 00:55:37.938214 kernel: Modules: 0 pages in range for non-PLT usage
Apr 30 00:55:37.938222 kernel: Modules: 509024 pages in range for PLT usage
Apr 30 00:55:37.938229 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 30 00:55:37.938238 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 30 00:55:37.938246 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 30 00:55:37.938254 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 30 00:55:37.938261 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 30 00:55:37.938269 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 30 00:55:37.938278 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 30 00:55:37.938286 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 30 00:55:37.938293 kernel: ACPI: Added _OSI(Module Device)
Apr 30 00:55:37.938301 kernel: ACPI: Added _OSI(Processor Device)
Apr 30 00:55:37.938310 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 30 00:55:37.938318 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 30 00:55:37.938326 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 30 00:55:37.938333 kernel: ACPI: Interpreter enabled
Apr 30 00:55:37.938341 kernel: ACPI: Using GIC for interrupt routing
Apr 30 00:55:37.938348 kernel: ACPI: MCFG table detected, 1 entries
Apr 30 00:55:37.938356 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Apr 30 00:55:37.938364 kernel: printk: console [ttyAMA0] enabled
Apr 30 00:55:37.938372 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 30 00:55:37.938598 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 30 00:55:37.938686 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 30 00:55:37.938757 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 30 00:55:37.938827 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Apr 30 00:55:37.938896 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Apr 30 00:55:37.938907 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Apr 30 00:55:37.938914 kernel: PCI host bridge to bus 0000:00
Apr 30 00:55:37.938995 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Apr 30 00:55:37.939102 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 30 00:55:37.939200 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Apr 30 00:55:37.939267 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 30 00:55:37.939359 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Apr 30 00:55:37.939521 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Apr 30 00:55:37.939613 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Apr 30 00:55:37.939685 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Apr 30 00:55:37.939755 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:55:37.939826 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Apr 30 00:55:37.939916 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Apr 30 00:55:37.939998 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Apr 30 00:55:37.940138 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Apr 30 00:55:37.940215 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 30 00:55:37.940302 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Apr 30 00:55:37.940314 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 30 00:55:37.940323 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 30 00:55:37.940331 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 30 00:55:37.940339 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 30 00:55:37.940347 kernel: iommu: Default domain type: Translated
Apr 30 00:55:37.940356 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 30 00:55:37.940364 kernel: efivars: Registered efivars operations
Apr 30 00:55:37.940374 kernel: vgaarb: loaded
Apr 30 00:55:37.940382 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 30 00:55:37.940390 kernel: VFS: Disk quotas dquot_6.6.0
Apr 30 00:55:37.940399 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 30 00:55:37.940407 kernel: pnp: PnP ACPI init
Apr 30 00:55:37.940514 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Apr 30 00:55:37.940528 kernel: pnp: PnP ACPI: found 1 devices
Apr 30 00:55:37.940537 kernel: NET: Registered PF_INET protocol family
Apr 30 00:55:37.940548 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 30 00:55:37.940556 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 30 00:55:37.940565 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 30 00:55:37.940573 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 30 00:55:37.940581 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 30 00:55:37.940589 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 30 00:55:37.940597 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:55:37.940605 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 30 00:55:37.940614 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 30 00:55:37.940623 kernel: PCI: CLS 0 bytes, default 64
Apr 30 00:55:37.940632 kernel: kvm [1]: HYP mode not available
Apr 30 00:55:37.940640 kernel: Initialise system trusted keyrings
Apr 30 00:55:37.940648 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 30 00:55:37.940657 kernel: Key type asymmetric registered
Apr 30 00:55:37.940665 kernel: Asymmetric key parser 'x509' registered
Apr 30 00:55:37.940673 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 30 00:55:37.940681 kernel: io scheduler mq-deadline registered
Apr 30 00:55:37.940689 kernel: io scheduler kyber registered
Apr 30 00:55:37.940698 kernel: io scheduler bfq registered
Apr 30 00:55:37.940706 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 30 00:55:37.940715 kernel: ACPI: button: Power Button [PWRB]
Apr 30 00:55:37.940723 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 30 00:55:37.940802 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Apr 30 00:55:37.940813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 30 00:55:37.940822 kernel: thunder_xcv, ver 1.0
Apr 30 00:55:37.940830 kernel: thunder_bgx, ver 1.0
Apr 30 00:55:37.940838 kernel: nicpf, ver 1.0
Apr 30 00:55:37.940848 kernel: nicvf, ver 1.0
Apr 30 00:55:37.940931 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 30 00:55:37.941002 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-30T00:55:37 UTC (1745974537)
Apr 30 00:55:37.941013 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 30 00:55:37.941022 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Apr 30 00:55:37.941030 kernel: watchdog: Delayed init of the lockup detector failed: -19
Apr 30 00:55:37.941038 kernel: watchdog: Hard watchdog permanently disabled
Apr 30 00:55:37.941047 kernel: NET: Registered PF_INET6 protocol family
Apr 30 00:55:37.941058 kernel: Segment Routing with IPv6
Apr 30 00:55:37.941066 kernel: In-situ OAM (IOAM) with IPv6
Apr 30 00:55:37.941074 kernel: NET: Registered PF_PACKET protocol family
Apr 30 00:55:37.941081 kernel: Key type dns_resolver registered
Apr 30 00:55:37.941090 kernel: registered taskstats version 1
Apr 30 00:55:37.941098 kernel: Loading compiled-in X.509 certificates
Apr 30 00:55:37.941106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: e2b28159d3a83b6f5d5db45519e470b1b834e378'
Apr 30 00:55:37.941114 kernel: Key type .fscrypt registered
Apr 30 00:55:37.941122 kernel: Key type fscrypt-provisioning registered
Apr 30 00:55:37.941131 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 30 00:55:37.941139 kernel: ima: Allocated hash algorithm: sha1
Apr 30 00:55:37.941147 kernel: ima: No architecture policies found
Apr 30 00:55:37.941183 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 30 00:55:37.941193 kernel: clk: Disabling unused clocks
Apr 30 00:55:37.941202 kernel: Freeing unused kernel memory: 39424K
Apr 30 00:55:37.941210 kernel: Run /init as init process
Apr 30 00:55:37.941218 kernel: with arguments:
Apr 30 00:55:37.941226 kernel: /init
Apr 30 00:55:37.941251 kernel: with environment:
Apr 30 00:55:37.941262 kernel: HOME=/
Apr 30 00:55:37.941270 kernel: TERM=linux
Apr 30 00:55:37.941278 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 30 00:55:37.941288 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:55:37.941299 systemd[1]: Detected virtualization kvm.
Apr 30 00:55:37.941308 systemd[1]: Detected architecture arm64.
Apr 30 00:55:37.941316 systemd[1]: Running in initrd.
Apr 30 00:55:37.941327 systemd[1]: No hostname configured, using default hostname.
Apr 30 00:55:37.941335 systemd[1]: Hostname set to .
Apr 30 00:55:37.941344 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:55:37.941353 systemd[1]: Queued start job for default target initrd.target.
Apr 30 00:55:37.941362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:37.941372 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:37.941381 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 30 00:55:37.941390 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:55:37.941401 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 30 00:55:37.941410 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 30 00:55:37.941420 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 30 00:55:37.941429 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 30 00:55:37.941454 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:37.941465 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:37.941478 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:55:37.941491 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:55:37.941501 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:55:37.941510 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:55:37.941519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:55:37.941528 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:55:37.941537 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:55:37.941546 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:55:37.941555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:37.941565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:37.941574 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:37.941582 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:55:37.941591 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 30 00:55:37.941600 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:55:37.941609 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 30 00:55:37.941618 systemd[1]: Starting systemd-fsck-usr.service...
Apr 30 00:55:37.941626 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:55:37.941635 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:55:37.941645 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:37.941654 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 30 00:55:37.941663 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:37.941671 systemd[1]: Finished systemd-fsck-usr.service.
Apr 30 00:55:37.941681 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:55:37.941719 systemd-journald[238]: Collecting audit messages is disabled.
Apr 30 00:55:37.941741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:37.941750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:37.941762 systemd-journald[238]: Journal started
Apr 30 00:55:37.941782 systemd-journald[238]: Runtime Journal (/run/log/journal/a4e5b93b67364e0bb5ed43c85d823a11) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:55:37.929321 systemd-modules-load[239]: Inserted module 'overlay'
Apr 30 00:55:37.944537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 30 00:55:37.946195 systemd-modules-load[239]: Inserted module 'br_netfilter'
Apr 30 00:55:37.947946 kernel: Bridge firewalling registered
Apr 30 00:55:37.947969 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:55:37.949388 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:37.950834 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:55:37.955396 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:55:37.957176 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:55:37.959649 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:55:37.969762 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:37.972544 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:37.976492 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:37.977985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:37.989722 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 30 00:55:37.992152 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:38.000929 dracut-cmdline[275]: dracut-dracut-053
Apr 30 00:55:38.003658 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f2ec97241771b99b21726307071be4f8c5924f9157dc58cd38c4fcfbe71412a
Apr 30 00:55:38.019342 systemd-resolved[276]: Positive Trust Anchors:
Apr 30 00:55:38.019364 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:55:38.019395 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:55:38.024274 systemd-resolved[276]: Defaulting to hostname 'linux'.
Apr 30 00:55:38.025293 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:38.028890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:38.075481 kernel: SCSI subsystem initialized
Apr 30 00:55:38.080457 kernel: Loading iSCSI transport class v2.0-870.
Apr 30 00:55:38.088487 kernel: iscsi: registered transport (tcp)
Apr 30 00:55:38.101477 kernel: iscsi: registered transport (qla4xxx)
Apr 30 00:55:38.101511 kernel: QLogic iSCSI HBA Driver
Apr 30 00:55:38.149518 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:55:38.160641 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 30 00:55:38.176833 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 30 00:55:38.176900 kernel: device-mapper: uevent: version 1.0.3
Apr 30 00:55:38.176912 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 30 00:55:38.226480 kernel: raid6: neonx8 gen() 15766 MB/s
Apr 30 00:55:38.243458 kernel: raid6: neonx4 gen() 15638 MB/s
Apr 30 00:55:38.260467 kernel: raid6: neonx2 gen() 13318 MB/s
Apr 30 00:55:38.277461 kernel: raid6: neonx1 gen() 10483 MB/s
Apr 30 00:55:38.294460 kernel: raid6: int64x8 gen() 6963 MB/s
Apr 30 00:55:38.311470 kernel: raid6: int64x4 gen() 7341 MB/s
Apr 30 00:55:38.328467 kernel: raid6: int64x2 gen() 6118 MB/s
Apr 30 00:55:38.345750 kernel: raid6: int64x1 gen() 5046 MB/s
Apr 30 00:55:38.345808 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Apr 30 00:55:38.363696 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Apr 30 00:55:38.363732 kernel: raid6: using neon recovery algorithm
Apr 30 00:55:38.370859 kernel: xor: measuring software checksum speed
Apr 30 00:55:38.370896 kernel: 8regs : 19764 MB/sec
Apr 30 00:55:38.371552 kernel: 32regs : 19547 MB/sec
Apr 30 00:55:38.372873 kernel: arm64_neon : 26963 MB/sec
Apr 30 00:55:38.372895 kernel: xor: using function: arm64_neon (26963 MB/sec)
Apr 30 00:55:38.426714 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 30 00:55:38.440993 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:55:38.452656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:38.465517 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Apr 30 00:55:38.468790 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:38.477696 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 30 00:55:38.492953 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Apr 30 00:55:38.525995 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:55:38.542678 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:55:38.590366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:38.601645 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 30 00:55:38.614477 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:55:38.616565 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:55:38.618037 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:38.620509 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:55:38.630535 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 30 00:55:38.636502 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Apr 30 00:55:38.664936 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 30 00:55:38.665058 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 30 00:55:38.665080 kernel: GPT:9289727 != 19775487
Apr 30 00:55:38.665094 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 30 00:55:38.665104 kernel: GPT:9289727 != 19775487
Apr 30 00:55:38.665114 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 30 00:55:38.665124 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:55:38.639073 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:55:38.648141 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:55:38.648343 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:38.649906 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:38.653265 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:55:38.653418 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:38.654882 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:38.668269 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:38.681881 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:38.686461 kernel: BTRFS: device fsid 7216ceb7-401c-42de-84de-44adb68241e4 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (520)
Apr 30 00:55:38.689579 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (515)
Apr 30 00:55:38.690578 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 30 00:55:38.701485 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 30 00:55:38.705893 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 30 00:55:38.707210 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 30 00:55:38.713867 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:55:38.729483 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 30 00:55:38.731357 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 30 00:55:38.736725 disk-uuid[556]: Primary Header is updated.
Apr 30 00:55:38.736725 disk-uuid[556]: Secondary Entries is updated.
Apr 30 00:55:38.736725 disk-uuid[556]: Secondary Header is updated.
Apr 30 00:55:38.744482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:55:38.748475 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:55:38.751480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:55:38.753272 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:39.752360 disk-uuid[557]: The operation has completed successfully.
Apr 30 00:55:39.753674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 30 00:55:39.774774 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 30 00:55:39.774890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 30 00:55:39.807377 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 30 00:55:39.810229 sh[579]: Success
Apr 30 00:55:39.824285 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Apr 30 00:55:39.857824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 30 00:55:39.866848 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 30 00:55:39.869462 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 30 00:55:39.879907 kernel: BTRFS info (device dm-0): first mount of filesystem 7216ceb7-401c-42de-84de-44adb68241e4
Apr 30 00:55:39.879956 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:39.879968 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 30 00:55:39.881984 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 30 00:55:39.882002 kernel: BTRFS info (device dm-0): using free space tree
Apr 30 00:55:39.886458 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 30 00:55:39.887985 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 30 00:55:39.900663 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 30 00:55:39.902362 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 30 00:55:39.912831 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:39.912879 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:39.912891 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:55:39.916466 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:55:39.932326 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 30 00:55:39.934170 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:39.943497 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 30 00:55:39.955633 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 30 00:55:40.043390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:55:40.066701 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:55:40.107312 systemd-networkd[765]: lo: Link UP
Apr 30 00:55:40.107327 systemd-networkd[765]: lo: Gained carrier
Apr 30 00:55:40.108037 systemd-networkd[765]: Enumeration completed
Apr 30 00:55:40.108606 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:40.108609 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:40.109551 systemd-networkd[765]: eth0: Link UP
Apr 30 00:55:40.109554 systemd-networkd[765]: eth0: Gained carrier
Apr 30 00:55:40.109561 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:40.109973 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:55:40.111224 systemd[1]: Reached target network.target - Network.
Apr 30 00:55:40.132494 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:55:40.144053 ignition[680]: Ignition 2.19.0
Apr 30 00:55:40.144062 ignition[680]: Stage: fetch-offline
Apr 30 00:55:40.144098 ignition[680]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:40.144107 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:40.144261 ignition[680]: parsed url from cmdline: ""
Apr 30 00:55:40.144264 ignition[680]: no config URL provided
Apr 30 00:55:40.144269 ignition[680]: reading system config file "/usr/lib/ignition/user.ign"
Apr 30 00:55:40.144276 ignition[680]: no config at "/usr/lib/ignition/user.ign"
Apr 30 00:55:40.144299 ignition[680]: op(1): [started] loading QEMU firmware config module
Apr 30 00:55:40.144303 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 30 00:55:40.154365 ignition[680]: op(1): [finished] loading QEMU firmware config module
Apr 30 00:55:40.194189 ignition[680]: parsing config with SHA512: 50e6ed6ed391801f509b732ef4dd088e619b2b191e444c4d3cd447bdf0f70dc9c5879af709e430cc4b3364af896f2d277cc00a2e92b589838c8f49cbfadffe09
Apr 30 00:55:40.199624 unknown[680]: fetched base config from "system"
Apr 30 00:55:40.200359 ignition[680]: fetch-offline: fetch-offline passed
Apr 30 00:55:40.199638 unknown[680]: fetched user config from "qemu"
Apr 30 00:55:40.200473 ignition[680]: Ignition finished successfully
Apr 30 00:55:40.201600 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:55:40.203814 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 30 00:55:40.214634 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 30 00:55:40.226062 ignition[779]: Ignition 2.19.0
Apr 30 00:55:40.226071 ignition[779]: Stage: kargs
Apr 30 00:55:40.226248 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:40.226257 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:40.227357 ignition[779]: kargs: kargs passed
Apr 30 00:55:40.227407 ignition[779]: Ignition finished successfully
Apr 30 00:55:40.230707 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 30 00:55:40.233371 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 30 00:55:40.247792 ignition[787]: Ignition 2.19.0
Apr 30 00:55:40.247804 ignition[787]: Stage: disks
Apr 30 00:55:40.247967 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:40.247977 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:40.250847 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 30 00:55:40.248913 ignition[787]: disks: disks passed
Apr 30 00:55:40.252635 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 30 00:55:40.248958 ignition[787]: Ignition finished successfully
Apr 30 00:55:40.254437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:55:40.256246 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:55:40.258174 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:55:40.259854 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:55:40.279606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 30 00:55:40.289844 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 30 00:55:40.293840 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 30 00:55:40.296336 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 30 00:55:40.343461 kernel: EXT4-fs (vda9): mounted filesystem c13301f3-70ec-4948-963a-f1db0e953273 r/w with ordered data mode. Quota mode: none.
Apr 30 00:55:40.343722 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 30 00:55:40.345023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 30 00:55:40.356524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:55:40.358278 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 30 00:55:40.359678 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 30 00:55:40.359717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 30 00:55:40.369474 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
Apr 30 00:55:40.369501 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:40.369513 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:40.369523 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:55:40.359739 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:55:40.373168 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:55:40.364200 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 30 00:55:40.365936 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 30 00:55:40.375906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:55:40.411488 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Apr 30 00:55:40.414750 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Apr 30 00:55:40.418719 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Apr 30 00:55:40.422403 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 30 00:55:40.493541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 30 00:55:40.503559 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 30 00:55:40.505110 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 30 00:55:40.510464 kernel: BTRFS info (device vda6): last unmount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:40.523537 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 30 00:55:40.528303 ignition[919]: INFO : Ignition 2.19.0
Apr 30 00:55:40.528303 ignition[919]: INFO : Stage: mount
Apr 30 00:55:40.530901 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:40.530901 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:40.530901 ignition[919]: INFO : mount: mount passed
Apr 30 00:55:40.530901 ignition[919]: INFO : Ignition finished successfully
Apr 30 00:55:40.531322 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 30 00:55:40.537550 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 30 00:55:40.878655 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 30 00:55:40.893636 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 30 00:55:40.899469 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (932)
Apr 30 00:55:40.903586 kernel: BTRFS info (device vda6): first mount of filesystem ece78588-c2c6-41f3-bdc2-614da63113c1
Apr 30 00:55:40.903607 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Apr 30 00:55:40.903618 kernel: BTRFS info (device vda6): using free space tree
Apr 30 00:55:40.910456 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 30 00:55:40.912109 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 30 00:55:40.934959 ignition[949]: INFO : Ignition 2.19.0
Apr 30 00:55:40.934959 ignition[949]: INFO : Stage: files
Apr 30 00:55:40.936699 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:40.936699 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:40.936699 ignition[949]: DEBUG : files: compiled without relabeling support, skipping
Apr 30 00:55:40.940140 ignition[949]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 30 00:55:40.940140 ignition[949]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 30 00:55:40.943843 ignition[949]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 30 00:55:40.945215 ignition[949]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 30 00:55:40.945215 ignition[949]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 30 00:55:40.944436 unknown[949]: wrote ssh authorized keys file for user: core
Apr 30 00:55:40.949309 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:55:40.949309 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 30 00:55:40.949309 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:55:40.949309 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Apr 30 00:55:41.050698 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 30 00:55:41.158331 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Apr 30 00:55:41.158331 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:55:41.162208 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Apr 30 00:55:41.501629 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 30 00:55:41.712074 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:55:41.713892 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Apr 30 00:55:41.846636 systemd-networkd[765]: eth0: Gained IPv6LL
Apr 30 00:55:41.958922 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 30 00:55:42.357106 ignition[949]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Apr 30 00:55:42.357106 ignition[949]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 30 00:55:42.360743 ignition[949]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:55:42.387751 ignition[949]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:55:42.391598 ignition[949]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 30 00:55:42.393153 ignition[949]: INFO : files: files passed
Apr 30 00:55:42.393153 ignition[949]: INFO : Ignition finished successfully
Apr 30 00:55:42.396090 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 30 00:55:42.411656 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 30 00:55:42.414233 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 30 00:55:42.415710 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 30 00:55:42.417541 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 30 00:55:42.422133 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 30 00:55:42.426069 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:42.426069 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:42.429471 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 30 00:55:42.429764 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:55:42.432371 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 30 00:55:42.445642 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 30 00:55:42.463020 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 30 00:55:42.463125 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 30 00:55:42.465318 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 30 00:55:42.467180 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 30 00:55:42.468993 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 30 00:55:42.469724 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 30 00:55:42.484754 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:55:42.492601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 30 00:55:42.500293 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:42.501575 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:42.503676 systemd[1]: Stopped target timers.target - Timer Units.
Apr 30 00:55:42.505469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 30 00:55:42.505593 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 30 00:55:42.508104 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 30 00:55:42.510105 systemd[1]: Stopped target basic.target - Basic System.
Apr 30 00:55:42.511809 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 30 00:55:42.513548 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 30 00:55:42.515519 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 30 00:55:42.517548 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 30 00:55:42.519415 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 30 00:55:42.521434 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 30 00:55:42.523411 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 30 00:55:42.525190 systemd[1]: Stopped target swap.target - Swaps.
Apr 30 00:55:42.526805 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 30 00:55:42.526920 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 30 00:55:42.529282 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:42.530469 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:42.532414 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 30 00:55:42.536509 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:42.537742 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 30 00:55:42.537854 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 30 00:55:42.540630 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 30 00:55:42.540743 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 30 00:55:42.542711 systemd[1]: Stopped target paths.target - Path Units.
Apr 30 00:55:42.544320 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 30 00:55:42.547513 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:42.548763 systemd[1]: Stopped target slices.target - Slice Units.
Apr 30 00:55:42.550841 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 30 00:55:42.552393 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 30 00:55:42.552507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 30 00:55:42.554174 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 30 00:55:42.554255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 30 00:55:42.555790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 30 00:55:42.555898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 30 00:55:42.557658 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 30 00:55:42.557757 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 30 00:55:42.569592 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 30 00:55:42.570492 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 30 00:55:42.570618 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:42.573760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 30 00:55:42.575278 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 30 00:55:42.575400 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:42.578782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 30 00:55:42.582453 ignition[1005]: INFO : Ignition 2.19.0
Apr 30 00:55:42.582453 ignition[1005]: INFO : Stage: umount
Apr 30 00:55:42.582453 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 30 00:55:42.582453 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 30 00:55:42.579049 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 30 00:55:42.588610 ignition[1005]: INFO : umount: umount passed
Apr 30 00:55:42.588610 ignition[1005]: INFO : Ignition finished successfully
Apr 30 00:55:42.585381 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 30 00:55:42.586512 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 30 00:55:42.589782 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 30 00:55:42.590231 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 30 00:55:42.590317 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 30 00:55:42.592557 systemd[1]: Stopped target network.target - Network.
Apr 30 00:55:42.593938 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 30 00:55:42.594010 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 30 00:55:42.595803 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 30 00:55:42.595851 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 30 00:55:42.597530 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 30 00:55:42.597576 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 30 00:55:42.599480 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 30 00:55:42.599528 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 30 00:55:42.602129 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 30 00:55:42.603731 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:42.608125 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 30 00:55:42.608242 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:42.610217 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 30 00:55:42.610265 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:42.610501 systemd-networkd[765]: eth0: DHCPv6 lease lost
Apr 30 00:55:42.611888 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 30 00:55:42.613480 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 30 00:55:42.615943 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 30 00:55:42.615994 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:42.627610 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 30 00:55:42.628518 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 30 00:55:42.628580 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 30 00:55:42.630684 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 30 00:55:42.630728 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:42.632641 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 30 00:55:42.632687 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:42.634972 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:42.648950 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 30 00:55:42.649061 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 30 00:55:42.653109 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 30 00:55:42.653241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:42.655287 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 30 00:55:42.655344 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:42.656514 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 30 00:55:42.656549 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:42.658575 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 30 00:55:42.658622 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 30 00:55:42.661321 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 30 00:55:42.661367 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 30 00:55:42.664159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 30 00:55:42.664201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 30 00:55:42.678642 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 30 00:55:42.679698 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 30 00:55:42.679757 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:42.681950 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 30 00:55:42.681997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:42.684336 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 30 00:55:42.685478 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 30 00:55:42.687324 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 30 00:55:42.687402 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 30 00:55:42.689857 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 30 00:55:42.690908 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 30 00:55:42.690973 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 30 00:55:42.693483 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 30 00:55:42.703087 systemd[1]: Switching root.
Apr 30 00:55:42.737338 systemd-journald[238]: Journal stopped
Apr 30 00:55:43.545384 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Apr 30 00:55:43.545644 kernel: SELinux: policy capability network_peer_controls=1
Apr 30 00:55:43.545664 kernel: SELinux: policy capability open_perms=1
Apr 30 00:55:43.545674 kernel: SELinux: policy capability extended_socket_class=1
Apr 30 00:55:43.545685 kernel: SELinux: policy capability always_check_network=0
Apr 30 00:55:43.545695 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 30 00:55:43.545704 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 30 00:55:43.545718 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 30 00:55:43.545728 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 30 00:55:43.545738 kernel: audit: type=1403 audit(1745974542.926:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 30 00:55:43.545750 systemd[1]: Successfully loaded SELinux policy in 35.777ms.
Apr 30 00:55:43.545770 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.689ms.
Apr 30 00:55:43.545782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 30 00:55:43.545794 systemd[1]: Detected virtualization kvm.
Apr 30 00:55:43.545804 systemd[1]: Detected architecture arm64.
Apr 30 00:55:43.545815 systemd[1]: Detected first boot.
Apr 30 00:55:43.545827 systemd[1]: Initializing machine ID from VM UUID.
Apr 30 00:55:43.545838 zram_generator::config[1072]: No configuration found.
Apr 30 00:55:43.545852 systemd[1]: Populated /etc with preset unit settings.
Apr 30 00:55:43.545863 systemd[1]: Queued start job for default target multi-user.target.
Apr 30 00:55:43.545877 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 30 00:55:43.545888 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 30 00:55:43.545900 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 30 00:55:43.545910 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 30 00:55:43.545922 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 30 00:55:43.545933 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 30 00:55:43.545944 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 30 00:55:43.545955 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 30 00:55:43.545966 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 30 00:55:43.545976 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 30 00:55:43.545988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 30 00:55:43.545998 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 30 00:55:43.546009 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 30 00:55:43.546022 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 30 00:55:43.546033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 30 00:55:43.546044 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Apr 30 00:55:43.546054 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 30 00:55:43.546066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 30 00:55:43.546076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 30 00:55:43.546089 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 30 00:55:43.546100 systemd[1]: Reached target slices.target - Slice Units.
Apr 30 00:55:43.546113 systemd[1]: Reached target swap.target - Swaps.
Apr 30 00:55:43.546123 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 30 00:55:43.546134 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 30 00:55:43.546145 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 30 00:55:43.546156 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 30 00:55:43.546166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 30 00:55:43.546177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 30 00:55:43.546188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 30 00:55:43.546198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 30 00:55:43.546210 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 30 00:55:43.546221 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 30 00:55:43.546232 systemd[1]: Mounting media.mount - External Media Directory...
Apr 30 00:55:43.546243 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 30 00:55:43.546254 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 30 00:55:43.546266 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 30 00:55:43.546277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 30 00:55:43.546288 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:43.546299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 30 00:55:43.546311 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 30 00:55:43.546322 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:43.546333 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:55:43.546344 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:43.546354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 30 00:55:43.546365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:43.546376 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 30 00:55:43.546386 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 30 00:55:43.546399 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 30 00:55:43.546410 kernel: fuse: init (API version 7.39)
Apr 30 00:55:43.546429 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 30 00:55:43.546448 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 30 00:55:43.546461 kernel: ACPI: bus type drm_connector registered
Apr 30 00:55:43.546471 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 30 00:55:43.546482 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 30 00:55:43.546493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 30 00:55:43.546503 kernel: loop: module loaded
Apr 30 00:55:43.546534 systemd-journald[1150]: Collecting audit messages is disabled.
Apr 30 00:55:43.546557 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 30 00:55:43.546568 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 30 00:55:43.546579 systemd[1]: Mounted media.mount - External Media Directory.
Apr 30 00:55:43.546589 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 30 00:55:43.546602 systemd-journald[1150]: Journal started
Apr 30 00:55:43.546625 systemd-journald[1150]: Runtime Journal (/run/log/journal/a4e5b93b67364e0bb5ed43c85d823a11) is 5.9M, max 47.3M, 41.4M free.
Apr 30 00:55:43.548572 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 30 00:55:43.550373 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 30 00:55:43.551790 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 30 00:55:43.553175 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 30 00:55:43.554836 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 30 00:55:43.556521 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 30 00:55:43.556712 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 30 00:55:43.558206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:43.558385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:43.559928 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:55:43.560114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:55:43.561562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:43.561736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:43.563540 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 30 00:55:43.563707 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 30 00:55:43.565063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:43.565282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:43.567038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 30 00:55:43.568684 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 30 00:55:43.570529 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 30 00:55:43.582787 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 30 00:55:43.593569 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 30 00:55:43.595827 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 30 00:55:43.597013 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 30 00:55:43.602776 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 30 00:55:43.606054 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 30 00:55:43.607559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:55:43.608611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 30 00:55:43.609880 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:55:43.612608 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 30 00:55:43.616639 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 30 00:55:43.620015 systemd-journald[1150]: Time spent on flushing to /var/log/journal/a4e5b93b67364e0bb5ed43c85d823a11 is 22.526ms for 849 entries.
Apr 30 00:55:43.620015 systemd-journald[1150]: System Journal (/var/log/journal/a4e5b93b67364e0bb5ed43c85d823a11) is 8.0M, max 195.6M, 187.6M free.
Apr 30 00:55:43.650229 systemd-journald[1150]: Received client request to flush runtime journal.
Apr 30 00:55:43.621017 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 30 00:55:43.622786 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 30 00:55:43.624115 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 30 00:55:43.634050 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 30 00:55:43.639975 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 30 00:55:43.645381 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 30 00:55:43.646958 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 30 00:55:43.658113 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 30 00:55:43.661849 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 30 00:55:43.661869 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Apr 30 00:55:43.665226 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 30 00:55:43.666055 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 30 00:55:43.677603 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 30 00:55:43.701151 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 30 00:55:43.713702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 30 00:55:43.726361 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Apr 30 00:55:43.726383 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Apr 30 00:55:43.730356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 30 00:55:44.072263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 30 00:55:44.080701 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 30 00:55:44.101979 systemd-udevd[1230]: Using default interface naming scheme 'v255'.
Apr 30 00:55:44.114098 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 30 00:55:44.124780 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 30 00:55:44.139749 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 30 00:55:44.141711 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Apr 30 00:55:44.169478 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1241)
Apr 30 00:55:44.192470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 30 00:55:44.212279 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 30 00:55:44.232781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 30 00:55:44.251538 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 30 00:55:44.261645 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 30 00:55:44.277064 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 30 00:55:44.277600 systemd-networkd[1238]: lo: Link UP
Apr 30 00:55:44.277604 systemd-networkd[1238]: lo: Gained carrier
Apr 30 00:55:44.278348 systemd-networkd[1238]: Enumeration completed
Apr 30 00:55:44.278667 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 30 00:55:44.279587 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:44.279595 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 30 00:55:44.280327 systemd-networkd[1238]: eth0: Link UP
Apr 30 00:55:44.280331 systemd-networkd[1238]: eth0: Gained carrier
Apr 30 00:55:44.280344 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 30 00:55:44.288135 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:55:44.295708 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 30 00:55:44.307510 systemd-networkd[1238]: eth0: DHCPv4 address 10.0.0.142/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 30 00:55:44.333032 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 30 00:55:44.334647 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 30 00:55:44.349675 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 30 00:55:44.353396 lvm[1276]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 30 00:55:44.394049 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 30 00:55:44.395733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 30 00:55:44.397177 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 30 00:55:44.397222 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 30 00:55:44.398357 systemd[1]: Reached target machines.target - Containers.
Apr 30 00:55:44.400997 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 30 00:55:44.415646 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 30 00:55:44.418203 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 30 00:55:44.419554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:44.420593 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 30 00:55:44.423635 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 30 00:55:44.426613 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 30 00:55:44.430749 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 30 00:55:44.438244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 30 00:55:44.446572 kernel: loop0: detected capacity change from 0 to 114432
Apr 30 00:55:44.449016 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 30 00:55:44.449806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 30 00:55:44.463486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 30 00:55:44.510483 kernel: loop1: detected capacity change from 0 to 114328
Apr 30 00:55:44.551503 kernel: loop2: detected capacity change from 0 to 194096
Apr 30 00:55:44.595469 kernel: loop3: detected capacity change from 0 to 114432
Apr 30 00:55:44.600494 kernel: loop4: detected capacity change from 0 to 114328
Apr 30 00:55:44.605463 kernel: loop5: detected capacity change from 0 to 194096
Apr 30 00:55:44.611231 (sd-merge)[1297]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 30 00:55:44.613397 (sd-merge)[1297]: Merged extensions into '/usr'.
Apr 30 00:55:44.617620 systemd[1]: Reloading requested from client PID 1284 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 30 00:55:44.617877 systemd[1]: Reloading...
Apr 30 00:55:44.664512 zram_generator::config[1324]: No configuration found.
Apr 30 00:55:44.681789 ldconfig[1280]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 30 00:55:44.769793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:55:44.814932 systemd[1]: Reloading finished in 196 ms.
Apr 30 00:55:44.835746 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 30 00:55:44.837658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 30 00:55:44.854605 systemd[1]: Starting ensure-sysext.service...
Apr 30 00:55:44.856483 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 30 00:55:44.861777 systemd[1]: Reloading requested from client PID 1366 ('systemctl') (unit ensure-sysext.service)...
Apr 30 00:55:44.861793 systemd[1]: Reloading...
Apr 30 00:55:44.873373 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 30 00:55:44.873672 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 30 00:55:44.874293 systemd-tmpfiles[1367]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 30 00:55:44.874539 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
Apr 30 00:55:44.874594 systemd-tmpfiles[1367]: ACLs are not supported, ignoring.
Apr 30 00:55:44.876868 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:55:44.876880 systemd-tmpfiles[1367]: Skipping /boot
Apr 30 00:55:44.883717 systemd-tmpfiles[1367]: Detected autofs mount point /boot during canonicalization of boot.
Apr 30 00:55:44.883732 systemd-tmpfiles[1367]: Skipping /boot
Apr 30 00:55:44.907555 zram_generator::config[1399]: No configuration found.
Apr 30 00:55:44.990332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:55:45.035297 systemd[1]: Reloading finished in 173 ms.
Apr 30 00:55:45.057564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 30 00:55:45.079158 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:55:45.082047 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 30 00:55:45.084629 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 30 00:55:45.087726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 30 00:55:45.093756 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 30 00:55:45.096931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:45.100878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:45.105715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:45.108365 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:45.112151 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:45.113114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:45.113296 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:45.116902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:45.117062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:45.120457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:45.120840 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:45.128346 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:45.134792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:45.137915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:45.143244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:45.144462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:45.146519 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 30 00:55:45.148467 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 30 00:55:45.150406 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:45.150611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:45.152625 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:45.152771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:45.154588 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:45.154800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:45.156250 augenrules[1476]: No rules
Apr 30 00:55:45.161477 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:55:45.165223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 30 00:55:45.176490 systemd-resolved[1443]: Positive Trust Anchors:
Apr 30 00:55:45.178218 systemd-resolved[1443]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 30 00:55:45.178251 systemd-resolved[1443]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 30 00:55:45.181764 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 30 00:55:45.184047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 30 00:55:45.185713 systemd-resolved[1443]: Defaulting to hostname 'linux'.
Apr 30 00:55:45.188697 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 30 00:55:45.192512 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 30 00:55:45.193715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 30 00:55:45.195190 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 30 00:55:45.197602 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 30 00:55:45.209940 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 30 00:55:45.211907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 30 00:55:45.212074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 30 00:55:45.213712 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 30 00:55:45.213862 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 30 00:55:45.215323 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 30 00:55:45.215500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 30 00:55:45.217239 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 30 00:55:45.217517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 30 00:55:45.219272 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 30 00:55:45.222492 systemd[1]: Finished ensure-sysext.service.
Apr 30 00:55:45.228042 systemd[1]: Reached target network.target - Network.
Apr 30 00:55:45.229022 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 30 00:55:45.230200 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 30 00:55:45.230274 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 30 00:55:45.238648 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 30 00:55:45.240054 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 30 00:55:45.280872 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 30 00:55:45.281764 systemd-timesyncd[1510]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 30 00:55:45.281816 systemd-timesyncd[1510]: Initial clock synchronization to Wed 2025-04-30 00:55:45.038825 UTC.
Apr 30 00:55:45.282595 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 30 00:55:45.283781 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 30 00:55:45.285069 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 30 00:55:45.286401 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 30 00:55:45.287742 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 30 00:55:45.287777 systemd[1]: Reached target paths.target - Path Units.
Apr 30 00:55:45.288722 systemd[1]: Reached target time-set.target - System Time Set.
Apr 30 00:55:45.289959 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 30 00:55:45.291157 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 30 00:55:45.292460 systemd[1]: Reached target timers.target - Timer Units.
Apr 30 00:55:45.294279 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 30 00:55:45.296825 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 30 00:55:45.298997 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 30 00:55:45.305388 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 30 00:55:45.306582 systemd[1]: Reached target sockets.target - Socket Units.
Apr 30 00:55:45.307577 systemd[1]: Reached target basic.target - Basic System.
Apr 30 00:55:45.308739 systemd[1]: System is tainted: cgroupsv1
Apr 30 00:55:45.308787 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:55:45.308807 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 30 00:55:45.309937 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 30 00:55:45.312030 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 30 00:55:45.314065 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 30 00:55:45.318628 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 30 00:55:45.319753 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 30 00:55:45.322060 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 30 00:55:45.330567 jq[1516]: false
Apr 30 00:55:45.327993 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 30 00:55:45.332688 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 30 00:55:45.335220 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 30 00:55:45.344130 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 30 00:55:45.356980 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 30 00:55:45.362911 systemd[1]: Starting update-engine.service - Update Engine...
Apr 30 00:55:45.365873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 30 00:55:45.371208 dbus-daemon[1515]: [system] SELinux support is enabled
Apr 30 00:55:45.374737 jq[1537]: true
Apr 30 00:55:45.372934 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 30 00:55:45.377704 extend-filesystems[1518]: Found loop3
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found loop4
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found loop5
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda1
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda2
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda3
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found usr
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda4
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda6
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda7
Apr 30 00:55:45.380417 extend-filesystems[1518]: Found vda9
Apr 30 00:55:45.380417 extend-filesystems[1518]: Checking size of /dev/vda9
Apr 30 00:55:45.384953 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 30 00:55:45.385228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 30 00:55:45.388374 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 30 00:55:45.388635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 30 00:55:45.400829 systemd[1]: motdgen.service: Deactivated successfully.
Apr 30 00:55:45.401081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 30 00:55:45.405494 extend-filesystems[1518]: Resized partition /dev/vda9
Apr 30 00:55:45.411429 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 30 00:55:45.412386 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 30 00:55:45.414163 jq[1543]: true
Apr 30 00:55:45.414646 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 30 00:55:45.414842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 30 00:55:45.416329 extend-filesystems[1556]: resize2fs 1.47.1 (20-May-2024)
Apr 30 00:55:45.417879 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 30 00:55:45.432727 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1248)
Apr 30 00:55:45.432799 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 30 00:55:45.432836 tar[1542]: linux-arm64/helm
Apr 30 00:55:45.468172 update_engine[1535]: I20250430 00:55:45.466257 1535 main.cc:92] Flatcar Update Engine starting
Apr 30 00:55:45.475912 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 30 00:55:45.476100 systemd-logind[1528]: New seat seat0.
Apr 30 00:55:45.476760 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 30 00:55:45.480149 systemd[1]: Started update-engine.service - Update Engine.
Apr 30 00:55:45.481306 update_engine[1535]: I20250430 00:55:45.480257 1535 update_check_scheduler.cc:74] Next update check in 7m22s
Apr 30 00:55:45.482968 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 30 00:55:45.495374 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 30 00:55:45.499468 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 30 00:55:45.532354 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 30 00:55:45.532354 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 30 00:55:45.532354 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 30 00:55:45.536830 extend-filesystems[1518]: Resized filesystem in /dev/vda9
Apr 30 00:55:45.537422 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 30 00:55:45.537683 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 30 00:55:45.539964 bash[1576]: Updated "/home/core/.ssh/authorized_keys"
Apr 30 00:55:45.541353 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 30 00:55:45.544851 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 30 00:55:45.555714 locksmithd[1577]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 30 00:55:45.673502 containerd[1554]: time="2025-04-30T00:55:45.672165760Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 30 00:55:45.702457 containerd[1554]: time="2025-04-30T00:55:45.702191440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.703858 containerd[1554]: time="2025-04-30T00:55:45.703812960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:45.703858 containerd[1554]: time="2025-04-30T00:55:45.703853800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 30 00:55:45.703940 containerd[1554]: time="2025-04-30T00:55:45.703871760Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 30 00:55:45.704061 containerd[1554]: time="2025-04-30T00:55:45.704039000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 30 00:55:45.704089 containerd[1554]: time="2025-04-30T00:55:45.704063400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704148 containerd[1554]: time="2025-04-30T00:55:45.704131840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704169 containerd[1554]: time="2025-04-30T00:55:45.704150800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704394 containerd[1554]: time="2025-04-30T00:55:45.704363680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704394 containerd[1554]: time="2025-04-30T00:55:45.704385160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704459 containerd[1554]: time="2025-04-30T00:55:45.704403600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704459 containerd[1554]: time="2025-04-30T00:55:45.704424960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704562 containerd[1554]: time="2025-04-30T00:55:45.704543520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704789 containerd[1554]: time="2025-04-30T00:55:45.704761160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704937 containerd[1554]: time="2025-04-30T00:55:45.704916120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 30 00:55:45.704963 containerd[1554]: time="2025-04-30T00:55:45.704936240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 30 00:55:45.705028 containerd[1554]: time="2025-04-30T00:55:45.705014000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 30 00:55:45.705072 containerd[1554]: time="2025-04-30T00:55:45.705060480Z" level=info msg="metadata content store policy set" policy=shared
Apr 30 00:55:45.708936 containerd[1554]: time="2025-04-30T00:55:45.708904000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 30 00:55:45.709042 containerd[1554]: time="2025-04-30T00:55:45.709022680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 30 00:55:45.709073 containerd[1554]: time="2025-04-30T00:55:45.709047240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 30 00:55:45.709111 containerd[1554]: time="2025-04-30T00:55:45.709076200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 30 00:55:45.709111 containerd[1554]: time="2025-04-30T00:55:45.709103760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 30 00:55:45.709295 containerd[1554]: time="2025-04-30T00:55:45.709276680Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 30 00:55:45.711156 containerd[1554]: time="2025-04-30T00:55:45.711122800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711610840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711637200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711651000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711676680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711693240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711721920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711740760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711757520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711769760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711782200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711796160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711816280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711830360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.711827 containerd[1554]: time="2025-04-30T00:55:45.711842320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711854200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711866280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711878920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711891440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711904200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711916960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711931560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711943560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711955080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711966440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.711982040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.712004960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.712019840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712130 containerd[1554]: time="2025-04-30T00:55:45.712030640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712147200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712166840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712177720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712189280Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712198720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712213440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712351040Z" level=info msg="NRI interface is disabled by configuration."
Apr 30 00:55:45.712597 containerd[1554]: time="2025-04-30T00:55:45.712370480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 30 00:55:45.712897 containerd[1554]: time="2025-04-30T00:55:45.712829840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 30 00:55:45.713006 containerd[1554]: time="2025-04-30T00:55:45.712942800Z" level=info msg="Connect containerd service"
Apr 30 00:55:45.713006 containerd[1554]: time="2025-04-30T00:55:45.712976760Z" level=info msg="using legacy CRI server"
Apr 30 00:55:45.713006 containerd[1554]: time="2025-04-30T00:55:45.712985320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 30 00:55:45.713141 containerd[1554]: time="2025-04-30T00:55:45.713119280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 30 00:55:45.714194 containerd[1554]: time="2025-04-30T00:55:45.714161480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714402320Z" level=info msg="Start subscribing containerd event"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714480520Z" level=info msg="Start recovering state"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714559640Z" level=info msg="Start event monitor"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714573080Z" level=info msg="Start snapshots syncer"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714582240Z" level=info msg="Start cni network conf syncer for default"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714590920Z" level=info msg="Start streaming server"
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714775080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714818280Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 30 00:55:45.716147 containerd[1554]: time="2025-04-30T00:55:45.714869360Z" level=info msg="containerd successfully booted in 0.044080s"
Apr 30 00:55:45.715001 systemd[1]: Started containerd.service - containerd container runtime.
Apr 30 00:55:45.739608 sshd_keygen[1536]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 30 00:55:45.759012 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 30 00:55:45.781121 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 30 00:55:45.788734 systemd[1]: issuegen.service: Deactivated successfully.
Apr 30 00:55:45.788985 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 30 00:55:45.801003 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 30 00:55:45.812813 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 30 00:55:45.822046 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 30 00:55:45.824882 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Apr 30 00:55:45.826647 systemd[1]: Reached target getty.target - Login Prompts.
Apr 30 00:55:45.837303 tar[1542]: linux-arm64/LICENSE
Apr 30 00:55:45.837376 tar[1542]: linux-arm64/README.md
Apr 30 00:55:45.845879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 30 00:55:46.262630 systemd-networkd[1238]: eth0: Gained IPv6LL
Apr 30 00:55:46.265039 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 30 00:55:46.267057 systemd[1]: Reached target network-online.target - Network is Online.
Apr 30 00:55:46.284739 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 30 00:55:46.287211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:55:46.289618 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 30 00:55:46.310992 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 30 00:55:46.311280 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 30 00:55:46.313150 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 30 00:55:46.322528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 30 00:55:46.798554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:55:46.800141 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 30 00:55:46.802074 (kubelet)[1653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:55:46.802383 systemd[1]: Startup finished in 5.874s (kernel) + 3.911s (userspace) = 9.786s.
Apr 30 00:55:47.303373 kubelet[1653]: E0430 00:55:47.303325 1653 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:55:47.305582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:55:47.305778 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:55:50.738347 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 30 00:55:50.749684 systemd[1]: Started sshd@0-10.0.0.142:22-10.0.0.1:57474.service - OpenSSH per-connection server daemon (10.0.0.1:57474).
Apr 30 00:55:50.810733 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 57474 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:50.812571 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:50.828002 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 30 00:55:50.838691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 30 00:55:50.841211 systemd-logind[1528]: New session 1 of user core.
Apr 30 00:55:50.848604 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 30 00:55:50.851195 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 30 00:55:50.858087 (systemd)[1673]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 30 00:55:50.930489 systemd[1673]: Queued start job for default target default.target.
Apr 30 00:55:50.930865 systemd[1673]: Created slice app.slice - User Application Slice.
Apr 30 00:55:50.930889 systemd[1673]: Reached target paths.target - Paths.
Apr 30 00:55:50.930900 systemd[1673]: Reached target timers.target - Timers.
Apr 30 00:55:50.949577 systemd[1673]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 30 00:55:50.955969 systemd[1673]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 30 00:55:50.956037 systemd[1673]: Reached target sockets.target - Sockets.
Apr 30 00:55:50.956049 systemd[1673]: Reached target basic.target - Basic System.
Apr 30 00:55:50.956094 systemd[1673]: Reached target default.target - Main User Target.
Apr 30 00:55:50.956118 systemd[1673]: Startup finished in 92ms.
Apr 30 00:55:50.956463 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 30 00:55:50.958232 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 30 00:55:51.016917 systemd[1]: Started sshd@1-10.0.0.142:22-10.0.0.1:57486.service - OpenSSH per-connection server daemon (10.0.0.1:57486).
Apr 30 00:55:51.052358 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 57486 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.053842 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.058476 systemd-logind[1528]: New session 2 of user core.
Apr 30 00:55:51.070775 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 30 00:55:51.124334 sshd[1685]: pam_unix(sshd:session): session closed for user core
Apr 30 00:55:51.132797 systemd[1]: Started sshd@2-10.0.0.142:22-10.0.0.1:57496.service - OpenSSH per-connection server daemon (10.0.0.1:57496).
Apr 30 00:55:51.133661 systemd[1]: sshd@1-10.0.0.142:22-10.0.0.1:57486.service: Deactivated successfully.
Apr 30 00:55:51.135049 systemd[1]: session-2.scope: Deactivated successfully.
Apr 30 00:55:51.135774 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit.
Apr 30 00:55:51.137050 systemd-logind[1528]: Removed session 2.
Apr 30 00:55:51.173200 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 57496 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.174640 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.180438 systemd-logind[1528]: New session 3 of user core.
Apr 30 00:55:51.190737 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 30 00:55:51.239149 sshd[1690]: pam_unix(sshd:session): session closed for user core
Apr 30 00:55:51.248781 systemd[1]: Started sshd@3-10.0.0.142:22-10.0.0.1:57512.service - OpenSSH per-connection server daemon (10.0.0.1:57512).
Apr 30 00:55:51.249585 systemd[1]: sshd@2-10.0.0.142:22-10.0.0.1:57496.service: Deactivated successfully.
Apr 30 00:55:51.251165 systemd[1]: session-3.scope: Deactivated successfully.
Apr 30 00:55:51.251752 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit.
Apr 30 00:55:51.253054 systemd-logind[1528]: Removed session 3.
Apr 30 00:55:51.284366 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 57512 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.285708 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.289777 systemd-logind[1528]: New session 4 of user core.
Apr 30 00:55:51.301767 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 30 00:55:51.354736 sshd[1698]: pam_unix(sshd:session): session closed for user core
Apr 30 00:55:51.362775 systemd[1]: Started sshd@4-10.0.0.142:22-10.0.0.1:57524.service - OpenSSH per-connection server daemon (10.0.0.1:57524).
Apr 30 00:55:51.363182 systemd[1]: sshd@3-10.0.0.142:22-10.0.0.1:57512.service: Deactivated successfully.
Apr 30 00:55:51.365069 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit.
Apr 30 00:55:51.365722 systemd[1]: session-4.scope: Deactivated successfully.
Apr 30 00:55:51.367021 systemd-logind[1528]: Removed session 4.
Apr 30 00:55:51.401226 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 57524 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.402563 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.406533 systemd-logind[1528]: New session 5 of user core.
Apr 30 00:55:51.415749 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 30 00:55:51.486611 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 30 00:55:51.486895 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:55:51.500269 sudo[1713]: pam_unix(sudo:session): session closed for user root
Apr 30 00:55:51.503093 sshd[1706]: pam_unix(sshd:session): session closed for user core
Apr 30 00:55:51.515731 systemd[1]: Started sshd@5-10.0.0.142:22-10.0.0.1:57526.service - OpenSSH per-connection server daemon (10.0.0.1:57526).
Apr 30 00:55:51.516123 systemd[1]: sshd@4-10.0.0.142:22-10.0.0.1:57524.service: Deactivated successfully.
Apr 30 00:55:51.517928 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit.
Apr 30 00:55:51.518533 systemd[1]: session-5.scope: Deactivated successfully.
Apr 30 00:55:51.520264 systemd-logind[1528]: Removed session 5.
Apr 30 00:55:51.551241 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 57526 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.553038 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.557133 systemd-logind[1528]: New session 6 of user core.
Apr 30 00:55:51.567712 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 30 00:55:51.619547 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 30 00:55:51.620173 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:55:51.623229 sudo[1723]: pam_unix(sudo:session): session closed for user root
Apr 30 00:55:51.628180 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 30 00:55:51.628463 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:55:51.656777 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 30 00:55:51.658221 auditctl[1726]: No rules
Apr 30 00:55:51.659111 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 30 00:55:51.659361 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 30 00:55:51.661212 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 30 00:55:51.685222 augenrules[1745]: No rules
Apr 30 00:55:51.686548 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 30 00:55:51.688691 sudo[1722]: pam_unix(sudo:session): session closed for user root
Apr 30 00:55:51.690358 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 30 00:55:51.703742 systemd[1]: Started sshd@6-10.0.0.142:22-10.0.0.1:57530.service - OpenSSH per-connection server daemon (10.0.0.1:57530).
Apr 30 00:55:51.704220 systemd[1]: sshd@5-10.0.0.142:22-10.0.0.1:57526.service: Deactivated successfully.
Apr 30 00:55:51.705779 systemd[1]: session-6.scope: Deactivated successfully.
Apr 30 00:55:51.707010 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit.
Apr 30 00:55:51.707988 systemd-logind[1528]: Removed session 6.
Apr 30 00:55:51.742325 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 57530 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:55:51.743711 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:55:51.748479 systemd-logind[1528]: New session 7 of user core.
Apr 30 00:55:51.762887 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 30 00:55:51.814918 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 30 00:55:51.815221 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 30 00:55:52.136042 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 30 00:55:52.136203 (dockerd)[1776]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 30 00:55:52.424269 dockerd[1776]: time="2025-04-30T00:55:52.424129836Z" level=info msg="Starting up"
Apr 30 00:55:52.499647 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport561990812-merged.mount: Deactivated successfully.
Apr 30 00:55:52.680331 dockerd[1776]: time="2025-04-30T00:55:52.680219710Z" level=info msg="Loading containers: start."
Apr 30 00:55:52.783467 kernel: Initializing XFRM netlink socket
Apr 30 00:55:52.857768 systemd-networkd[1238]: docker0: Link UP
Apr 30 00:55:52.874856 dockerd[1776]: time="2025-04-30T00:55:52.874798569Z" level=info msg="Loading containers: done."
Apr 30 00:55:52.891131 dockerd[1776]: time="2025-04-30T00:55:52.891066883Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 30 00:55:52.891310 dockerd[1776]: time="2025-04-30T00:55:52.891212511Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 30 00:55:52.891384 dockerd[1776]: time="2025-04-30T00:55:52.891317699Z" level=info msg="Daemon has completed initialization"
Apr 30 00:55:52.917924 dockerd[1776]: time="2025-04-30T00:55:52.917784045Z" level=info msg="API listen on /run/docker.sock"
Apr 30 00:55:52.918020 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 30 00:55:53.496884 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1782010777-merged.mount: Deactivated successfully.
Apr 30 00:55:53.690454 containerd[1554]: time="2025-04-30T00:55:53.690396465Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
Apr 30 00:55:54.348480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975017116.mount: Deactivated successfully.
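The odd-looking mount unit names in these entries (e.g. var-lib-docker-check\x2doverlayfs\x2dsupport561990812-merged.mount) are systemd's path escaping: "/" separators become "-", and a literal "-" inside a path component is rewritten as \x2d so it cannot be confused with a separator. `systemd-escape --path` implements the full algorithm; the dash rule alone can be sketched in plain shell (illustrative only, not the complete escaping):

```shell
# Sketch of one rule of systemd unit-name escaping: a literal "-"
# inside a path component becomes \x2d, which is why a tmp dir like
# /var/lib/docker/check-overlayfs-support561990812/merged shows up as
# var-lib-docker-check\x2doverlayfs\x2dsupport561990812-merged.mount.
# (systemd-escape --path is the real, complete implementation.)
escape_dashes() {
    printf '%s\n' "$1" | sed 's/-/\\x2d/g'
}
escape_dashes 'check-overlayfs-support561990812'
```

This is purely cosmetic in the log; the underlying mounts are dockerd's overlayfs feature probes and containerd's temporary pull mounts being cleaned up.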
Apr 30 00:55:55.674458 containerd[1554]: time="2025-04-30T00:55:55.674277089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:55.674900 containerd[1554]: time="2025-04-30T00:55:55.674653027Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=29794152"
Apr 30 00:55:55.675342 containerd[1554]: time="2025-04-30T00:55:55.675320479Z" level=info msg="ImageCreate event name:\"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:55.679119 containerd[1554]: time="2025-04-30T00:55:55.679067805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:55.680108 containerd[1554]: time="2025-04-30T00:55:55.680053843Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"29790950\" in 1.989612098s"
Apr 30 00:55:55.680108 containerd[1554]: time="2025-04-30T00:55:55.680091813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:afbe230ec4abc2c9e87f7fbe7814bde21dbe30f03252c8861c4ca9510cb43ec6\""
Apr 30 00:55:55.698508 containerd[1554]: time="2025-04-30T00:55:55.698468554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
Apr 30 00:55:57.091709 containerd[1554]: time="2025-04-30T00:55:57.091653729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:57.092519 containerd[1554]: time="2025-04-30T00:55:57.092479796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=26855552"
Apr 30 00:55:57.093124 containerd[1554]: time="2025-04-30T00:55:57.093089824Z" level=info msg="ImageCreate event name:\"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:57.096283 containerd[1554]: time="2025-04-30T00:55:57.096204189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:57.097389 containerd[1554]: time="2025-04-30T00:55:57.097361287Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"28297111\" in 1.398853631s"
Apr 30 00:55:57.097459 containerd[1554]: time="2025-04-30T00:55:57.097395368Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:3df23260c56ff58d759f8a841c67846184e97ce81a269549ca8d14b36da14c14\""
Apr 30 00:55:57.116475 containerd[1554]: time="2025-04-30T00:55:57.116271345Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
Apr 30 00:55:57.556004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 30 00:55:57.566629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:55:57.667148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
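Each containerd "Pulled image ... size ... in ...s" entry above carries enough data to estimate pull throughput; e.g. kube-apiserver transferred 29,790,950 bytes in 1.989612098 s. A small sketch of that arithmetic (the helper name is ours):

```shell
# Sketch: estimate registry pull throughput from a containerd
# "Pulled image" line, e.g. kube-apiserver above:
# size 29790950 bytes in 1.989612098s.
pull_rate_mib() {
    # $1 = bytes transferred, $2 = seconds; prints MiB/s, one decimal
    awk -v b="$1" -v s="$2" 'BEGIN { printf "%.1f\n", b / s / 1048576 }'
}
pull_rate_mib 29790950 1.989612098   # roughly 14 MiB/s for this pull
```

A rough per-image rate like this is handy when deciding whether slow node bootstrap is registry bandwidth or something else.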
Apr 30 00:55:57.669473 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:55:57.720275 kubelet[2014]: E0430 00:55:57.720218 2014 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:55:57.722986 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:55:57.723174 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:55:58.260543 containerd[1554]: time="2025-04-30T00:55:58.260480183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:58.261918 containerd[1554]: time="2025-04-30T00:55:58.261656825Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=16263947"
Apr 30 00:55:58.262647 containerd[1554]: time="2025-04-30T00:55:58.262605708Z" level=info msg="ImageCreate event name:\"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:58.267468 containerd[1554]: time="2025-04-30T00:55:58.266244166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:58.268647 containerd[1554]: time="2025-04-30T00:55:58.267401566Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"17705524\" in 1.15106364s"
Apr 30 00:55:58.268647 containerd[1554]: time="2025-04-30T00:55:58.268114500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:fb0f5dac5fa74463b801d11598454c00462609b582d17052195012e5f682c2ba\""
Apr 30 00:55:58.288156 containerd[1554]: time="2025-04-30T00:55:58.288123055Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
Apr 30 00:55:59.355244 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241135631.mount: Deactivated successfully.
Apr 30 00:55:59.668945 containerd[1554]: time="2025-04-30T00:55:59.668826702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:59.669891 containerd[1554]: time="2025-04-30T00:55:59.669589559Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
Apr 30 00:55:59.670686 containerd[1554]: time="2025-04-30T00:55:59.670653596Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:59.673480 containerd[1554]: time="2025-04-30T00:55:59.673297934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:55:59.673839 containerd[1554]: time="2025-04-30T00:55:59.673815232Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.385653406s"
Apr 30 00:55:59.673885 containerd[1554]: time="2025-04-30T00:55:59.673845867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
Apr 30 00:55:59.692368 containerd[1554]: time="2025-04-30T00:55:59.692331959Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Apr 30 00:56:00.304019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424122466.mount: Deactivated successfully.
Apr 30 00:56:00.844088 containerd[1554]: time="2025-04-30T00:56:00.844042569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:00.844955 containerd[1554]: time="2025-04-30T00:56:00.844590034Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Apr 30 00:56:00.845838 containerd[1554]: time="2025-04-30T00:56:00.845780555Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:00.849031 containerd[1554]: time="2025-04-30T00:56:00.848969437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:00.850409 containerd[1554]: time="2025-04-30T00:56:00.850369732Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.157998338s"
Apr 30 00:56:00.850498 containerd[1554]: time="2025-04-30T00:56:00.850414442Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Apr 30 00:56:00.870589 containerd[1554]: time="2025-04-30T00:56:00.870358484Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Apr 30 00:56:01.644604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount450831133.mount: Deactivated successfully.
Apr 30 00:56:01.652832 containerd[1554]: time="2025-04-30T00:56:01.652773306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:01.656180 containerd[1554]: time="2025-04-30T00:56:01.656141095Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Apr 30 00:56:01.659691 containerd[1554]: time="2025-04-30T00:56:01.659643331Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:01.665919 containerd[1554]: time="2025-04-30T00:56:01.665877332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:01.666446 containerd[1554]: time="2025-04-30T00:56:01.666404763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 796.009925ms"
Apr 30 00:56:01.666486 containerd[1554]: time="2025-04-30T00:56:01.666463282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Apr 30 00:56:01.684983 containerd[1554]: time="2025-04-30T00:56:01.684739557Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Apr 30 00:56:02.225826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301089911.mount: Deactivated successfully.
Apr 30 00:56:04.170969 containerd[1554]: time="2025-04-30T00:56:04.170910709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:04.172768 containerd[1554]: time="2025-04-30T00:56:04.172708203Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Apr 30 00:56:04.174564 containerd[1554]: time="2025-04-30T00:56:04.173824053Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:04.176781 containerd[1554]: time="2025-04-30T00:56:04.176745096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 00:56:04.178076 containerd[1554]: time="2025-04-30T00:56:04.178041768Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.493262123s"
Apr 30 00:56:04.178124 containerd[1554]: time="2025-04-30T00:56:04.178075555Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Apr 30 00:56:07.973486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 30 00:56:07.983656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:08.135640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:08.139760 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 30 00:56:08.175869 kubelet[2245]: E0430 00:56:08.175797 2245 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 30 00:56:08.178401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 30 00:56:08.178591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 30 00:56:09.317552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:09.328710 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:09.348816 systemd[1]: Reloading requested from client PID 2262 ('systemctl') (unit session-7.scope)...
Apr 30 00:56:09.348832 systemd[1]: Reloading...
Apr 30 00:56:09.406477 zram_generator::config[2301]: No configuration found.
Apr 30 00:56:09.503924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 30 00:56:09.556492 systemd[1]: Reloading finished in 207 ms.
Apr 30 00:56:09.588408 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 30 00:56:09.588617 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 30 00:56:09.588881 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:09.591131 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 30 00:56:09.679038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 30 00:56:09.682846 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 30 00:56:09.721671 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 30 00:56:09.721671 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 30 00:56:09.721671 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
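The deprecation warnings above point at moving these flags into the config file passed via --config. A fragment of the corresponding KubeletConfiguration could look like this (field names from the kubelet.config.k8s.io/v1beta1 API; the endpoint and directory values here are illustrative defaults, not read from this host):

```shell
# Sketch: config-file equivalents of the deprecated kubelet flags
# warned about above, written as a KubeletConfiguration fragment.
# The values are illustrative placeholders, not taken from this log.
f=$(mktemp)
cat > "$f" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
cat "$f"
```

Note that --pod-infra-container-image has no config-file counterpart; per the warning itself, the sandbox image information will come from the CRI instead.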
Apr 30 00:56:09.722025 kubelet[2359]: I0430 00:56:09.721778 2359 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:56:10.209904 kubelet[2359]: I0430 00:56:10.209860 2359 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:56:10.209904 kubelet[2359]: I0430 00:56:10.209892 2359 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:56:10.210127 kubelet[2359]: I0430 00:56:10.210112 2359 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:56:10.234697 kubelet[2359]: E0430 00:56:10.234636 2359 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.234697 kubelet[2359]: I0430 00:56:10.234679 2359 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:56:10.247164 kubelet[2359]: I0430 00:56:10.247136 2359 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:56:10.247692 kubelet[2359]: I0430 00:56:10.247659 2359 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:56:10.247877 kubelet[2359]: I0430 00:56:10.247688 2359 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:56:10.247969 kubelet[2359]: I0430 00:56:10.247933 2359 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:56:10.247969 kubelet[2359]: I0430 00:56:10.247943 2359 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:56:10.248223 kubelet[2359]: I0430 00:56:10.248195 2359 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:56:10.250862 kubelet[2359]: I0430 00:56:10.250823 2359 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:56:10.250862 kubelet[2359]: I0430 00:56:10.250847 2359 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:56:10.250983 kubelet[2359]: I0430 00:56:10.250975 2359 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:56:10.251859 kubelet[2359]: I0430 00:56:10.251067 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:56:10.252326 kubelet[2359]: W0430 00:56:10.252148 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.252326 kubelet[2359]: E0430 00:56:10.252207 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.252326 kubelet[2359]: W0430 00:56:10.252149 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.252326 kubelet[2359]: E0430 00:56:10.252234 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.252326 kubelet[2359]: I0430 00:56:10.252239 2359 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:56:10.252647 kubelet[2359]: I0430 00:56:10.252619 2359 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:56:10.252744 kubelet[2359]: W0430 00:56:10.252731 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 30 00:56:10.255726 kubelet[2359]: I0430 00:56:10.255463 2359 server.go:1264] "Started kubelet"
Apr 30 00:56:10.256926 kubelet[2359]: I0430 00:56:10.256901 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:56:10.265241 kubelet[2359]: I0430 00:56:10.263093 2359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:56:10.265241 kubelet[2359]: I0430 00:56:10.264405 2359 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:56:10.269606 kubelet[2359]: I0430 00:56:10.269530 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:56:10.269951 kubelet[2359]: I0430 00:56:10.269934 2359 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:56:10.270222 kubelet[2359]: E0430 00:56:10.269872 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.142:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.142:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183af29efef09e49 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-30 00:56:10.255425097 +0000 UTC m=+0.569520256,LastTimestamp:2025-04-30 00:56:10.255425097 +0000 UTC m=+0.569520256,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 30 00:56:10.270222 kubelet[2359]: I0430 00:56:10.270201 2359 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:56:10.271420 kubelet[2359]: E0430 00:56:10.270956 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="200ms"
Apr 30 00:56:10.271420 kubelet[2359]: W0430 00:56:10.271341 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.271420 kubelet[2359]: E0430 00:56:10.271389 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused
Apr 30 00:56:10.271563 kubelet[2359]: I0430 00:56:10.270094 2359 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:56:10.272613 kubelet[2359]: I0430 00:56:10.272593 2359 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:56:10.273776 kubelet[2359]: I0430 00:56:10.273753 2359 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:56:10.274065 kubelet[2359]: I0430 00:56:10.274043 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:56:10.276626 kubelet[2359]: I0430 00:56:10.276607 2359 factory.go:221] Registration of the containerd container factory successfully Apr 30 00:56:10.277843 kubelet[2359]: E0430 00:56:10.277817 2359 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 00:56:10.282414 kubelet[2359]: I0430 00:56:10.282372 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 00:56:10.283516 kubelet[2359]: I0430 00:56:10.283487 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 00:56:10.283516 kubelet[2359]: I0430 00:56:10.283522 2359 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 00:56:10.283669 kubelet[2359]: I0430 00:56:10.283538 2359 kubelet.go:2337] "Starting kubelet main sync loop" Apr 30 00:56:10.283669 kubelet[2359]: E0430 00:56:10.283583 2359 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 00:56:10.284225 kubelet[2359]: W0430 00:56:10.284158 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:10.284225 kubelet[2359]: E0430 00:56:10.284202 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:10.300506 kubelet[2359]: I0430 00:56:10.300480 2359 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 30 00:56:10.300506 kubelet[2359]: I0430 00:56:10.300513 2359 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Apr 30 00:56:10.300657 kubelet[2359]: I0430 00:56:10.300535 2359 state_mem.go:36] "Initialized new in-memory state store" Apr 30 00:56:10.372619 kubelet[2359]: I0430 00:56:10.372574 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:56:10.372996 kubelet[2359]: E0430 00:56:10.372959 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Apr 30 00:56:10.384203 kubelet[2359]: E0430 00:56:10.384171 2359 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 30 00:56:10.427286 kubelet[2359]: I0430 00:56:10.427252 2359 policy_none.go:49] "None policy: Start" Apr 30 00:56:10.428117 kubelet[2359]: I0430 00:56:10.428005 2359 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 00:56:10.428268 kubelet[2359]: I0430 00:56:10.428204 2359 state_mem.go:35] "Initializing new in-memory state store" Apr 30 00:56:10.432806 kubelet[2359]: I0430 00:56:10.432262 2359 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 00:56:10.432806 kubelet[2359]: I0430 00:56:10.432462 2359 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 00:56:10.432806 kubelet[2359]: I0430 00:56:10.432592 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 00:56:10.433859 kubelet[2359]: E0430 00:56:10.433839 2359 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 30 00:56:10.471663 kubelet[2359]: E0430 00:56:10.471546 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="400ms" Apr 30 00:56:10.575085 kubelet[2359]: I0430 00:56:10.575062 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:56:10.575419 kubelet[2359]: E0430 00:56:10.575393 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Apr 30 00:56:10.584557 kubelet[2359]: I0430 00:56:10.584516 2359 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 30 00:56:10.585542 kubelet[2359]: I0430 00:56:10.585510 2359 topology_manager.go:215] "Topology Admit Handler" podUID="8858a45a9f67039427f20f3011918595" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 30 00:56:10.586837 kubelet[2359]: I0430 00:56:10.586519 2359 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 30 00:56:10.674416 kubelet[2359]: I0430 00:56:10.674364 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" Apr 30 00:56:10.674416 kubelet[2359]: I0430 00:56:10.674400 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " 
pod="kube-system/kube-apiserver-localhost" Apr 30 00:56:10.674416 kubelet[2359]: I0430 00:56:10.674421 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:56:10.674416 kubelet[2359]: I0430 00:56:10.674438 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:56:10.674640 kubelet[2359]: I0430 00:56:10.674480 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " pod="kube-system/kube-apiserver-localhost" Apr 30 00:56:10.674640 kubelet[2359]: I0430 00:56:10.674497 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:56:10.674640 kubelet[2359]: I0430 00:56:10.674515 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 30 00:56:10.674640 kubelet[2359]: I0430 00:56:10.674530 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:56:10.674640 kubelet[2359]: I0430 00:56:10.674547 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" Apr 30 00:56:10.872929 kubelet[2359]: E0430 00:56:10.872794 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="800ms" Apr 30 00:56:10.890785 kubelet[2359]: E0430 00:56:10.890365 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:10.891083 kubelet[2359]: E0430 00:56:10.891055 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:10.891493 containerd[1554]: time="2025-04-30T00:56:10.891431872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" Apr 30 00:56:10.892190 containerd[1554]: 
time="2025-04-30T00:56:10.891462234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" Apr 30 00:56:10.892247 kubelet[2359]: E0430 00:56:10.891983 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:10.892300 containerd[1554]: time="2025-04-30T00:56:10.892247746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8858a45a9f67039427f20f3011918595,Namespace:kube-system,Attempt:0,}" Apr 30 00:56:10.977057 kubelet[2359]: I0430 00:56:10.977016 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:56:10.977384 kubelet[2359]: E0430 00:56:10.977338 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Apr 30 00:56:11.192923 kubelet[2359]: W0430 00:56:11.192780 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.192923 kubelet[2359]: E0430 00:56:11.192848 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.216154 kubelet[2359]: W0430 00:56:11.216112 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.216242 kubelet[2359]: 
E0430 00:56:11.216160 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.426248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2804853019.mount: Deactivated successfully. Apr 30 00:56:11.430098 containerd[1554]: time="2025-04-30T00:56:11.430047614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:56:11.431770 containerd[1554]: time="2025-04-30T00:56:11.431732357Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:56:11.432526 containerd[1554]: time="2025-04-30T00:56:11.432459013Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:56:11.433431 containerd[1554]: time="2025-04-30T00:56:11.433396282Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:56:11.434316 containerd[1554]: time="2025-04-30T00:56:11.434283605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Apr 30 00:56:11.434937 containerd[1554]: time="2025-04-30T00:56:11.434904336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:56:11.435053 containerd[1554]: time="2025-04-30T00:56:11.435021170Z" level=info msg="stop pulling 
image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 00:56:11.437717 containerd[1554]: time="2025-04-30T00:56:11.437678383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 00:56:11.439378 containerd[1554]: time="2025-04-30T00:56:11.439250368Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.65949ms" Apr 30 00:56:11.440814 containerd[1554]: time="2025-04-30T00:56:11.440784992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.800001ms" Apr 30 00:56:11.442878 containerd[1554]: time="2025-04-30T00:56:11.442741602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.444196ms" Apr 30 00:56:11.586025 containerd[1554]: time="2025-04-30T00:56:11.585001906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:56:11.586025 containerd[1554]: time="2025-04-30T00:56:11.585067835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:56:11.586025 containerd[1554]: time="2025-04-30T00:56:11.585087334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.586025 containerd[1554]: time="2025-04-30T00:56:11.585185388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587512758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587786822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587809118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587561226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587627794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587640820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.587840 containerd[1554]: time="2025-04-30T00:56:11.587745147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.588238 containerd[1554]: time="2025-04-30T00:56:11.588128893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:11.642491 containerd[1554]: time="2025-04-30T00:56:11.641381091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c7619c8e4be9aef584271fedd994f55c5ae66b4d9a38bbc864c6878f07c0465\"" Apr 30 00:56:11.644422 kubelet[2359]: E0430 00:56:11.644396 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:11.646709 containerd[1554]: time="2025-04-30T00:56:11.645659755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d79157752370411efa19be85b4db1a5967f97775df460970901c6e85f621033\"" Apr 30 00:56:11.646825 kubelet[2359]: E0430 00:56:11.646153 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:11.647135 containerd[1554]: time="2025-04-30T00:56:11.647093808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8858a45a9f67039427f20f3011918595,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ee4ed794385c23ebb8013a0cdeabe9ac9fc3725c25f33f04ef0de22d232f786\"" Apr 30 00:56:11.648105 kubelet[2359]: E0430 00:56:11.648070 2359 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:11.649276 containerd[1554]: time="2025-04-30T00:56:11.649218397Z" level=info msg="CreateContainer within sandbox \"6c7619c8e4be9aef584271fedd994f55c5ae66b4d9a38bbc864c6878f07c0465\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 30 00:56:11.649974 containerd[1554]: time="2025-04-30T00:56:11.649939219Z" level=info msg="CreateContainer within sandbox \"2d79157752370411efa19be85b4db1a5967f97775df460970901c6e85f621033\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 30 00:56:11.650819 containerd[1554]: time="2025-04-30T00:56:11.650609536Z" level=info msg="CreateContainer within sandbox \"9ee4ed794385c23ebb8013a0cdeabe9ac9fc3725c25f33f04ef0de22d232f786\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 30 00:56:11.665954 containerd[1554]: time="2025-04-30T00:56:11.665912669Z" level=info msg="CreateContainer within sandbox \"6c7619c8e4be9aef584271fedd994f55c5ae66b4d9a38bbc864c6878f07c0465\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38ee9c14c06efd49137e9a2016cc2a5392c23b73a10f9e7208c17df9fc95a702\"" Apr 30 00:56:11.666697 containerd[1554]: time="2025-04-30T00:56:11.666673128Z" level=info msg="StartContainer for \"38ee9c14c06efd49137e9a2016cc2a5392c23b73a10f9e7208c17df9fc95a702\"" Apr 30 00:56:11.670142 containerd[1554]: time="2025-04-30T00:56:11.670100831Z" level=info msg="CreateContainer within sandbox \"9ee4ed794385c23ebb8013a0cdeabe9ac9fc3725c25f33f04ef0de22d232f786\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e5e060fd4c497d94132305c2516e41b9494901cf108636619414e0de533027d\"" Apr 30 00:56:11.670515 containerd[1554]: time="2025-04-30T00:56:11.670486175Z" level=info msg="CreateContainer within sandbox 
\"2d79157752370411efa19be85b4db1a5967f97775df460970901c6e85f621033\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d284c94e9973eba89a4178d101b3acd8833b3c54ccb70cebd7d3f829840593c\"" Apr 30 00:56:11.670583 containerd[1554]: time="2025-04-30T00:56:11.670546470Z" level=info msg="StartContainer for \"6e5e060fd4c497d94132305c2516e41b9494901cf108636619414e0de533027d\"" Apr 30 00:56:11.670892 containerd[1554]: time="2025-04-30T00:56:11.670865806Z" level=info msg="StartContainer for \"5d284c94e9973eba89a4178d101b3acd8833b3c54ccb70cebd7d3f829840593c\"" Apr 30 00:56:11.673412 kubelet[2359]: E0430 00:56:11.673362 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.142:6443: connect: connection refused" interval="1.6s" Apr 30 00:56:11.765241 kubelet[2359]: W0430 00:56:11.761900 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.765241 kubelet[2359]: E0430 00:56:11.761973 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.779569 kubelet[2359]: I0430 00:56:11.779537 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:56:11.780189 kubelet[2359]: E0430 00:56:11.780161 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.142:6443/api/v1/nodes\": dial tcp 10.0.0.142:6443: connect: connection refused" node="localhost" Apr 30 00:56:11.783222 containerd[1554]: 
time="2025-04-30T00:56:11.783191361Z" level=info msg="StartContainer for \"38ee9c14c06efd49137e9a2016cc2a5392c23b73a10f9e7208c17df9fc95a702\" returns successfully" Apr 30 00:56:11.783518 containerd[1554]: time="2025-04-30T00:56:11.783390906Z" level=info msg="StartContainer for \"5d284c94e9973eba89a4178d101b3acd8833b3c54ccb70cebd7d3f829840593c\" returns successfully" Apr 30 00:56:11.783518 containerd[1554]: time="2025-04-30T00:56:11.783423870Z" level=info msg="StartContainer for \"6e5e060fd4c497d94132305c2516e41b9494901cf108636619414e0de533027d\" returns successfully" Apr 30 00:56:11.821008 kubelet[2359]: W0430 00:56:11.820952 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:11.821126 kubelet[2359]: E0430 00:56:11.821113 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.142:6443: connect: connection refused Apr 30 00:56:12.292962 kubelet[2359]: E0430 00:56:12.292871 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:12.301299 kubelet[2359]: E0430 00:56:12.300798 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:12.302467 kubelet[2359]: E0430 00:56:12.302428 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:13.304245 kubelet[2359]: E0430 
00:56:13.304206 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:13.381826 kubelet[2359]: I0430 00:56:13.381724 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 30 00:56:14.107543 kubelet[2359]: E0430 00:56:14.107245 2359 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 30 00:56:14.155313 kubelet[2359]: E0430 00:56:14.154861 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:14.253015 kubelet[2359]: I0430 00:56:14.252977 2359 apiserver.go:52] "Watching apiserver" Apr 30 00:56:14.264079 kubelet[2359]: I0430 00:56:14.264040 2359 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 30 00:56:14.271811 kubelet[2359]: I0430 00:56:14.271777 2359 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 30 00:56:16.140235 systemd[1]: Reloading requested from client PID 2636 ('systemctl') (unit session-7.scope)... Apr 30 00:56:16.140254 systemd[1]: Reloading... Apr 30 00:56:16.223534 zram_generator::config[2678]: No configuration found. Apr 30 00:56:16.316788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 00:56:16.359017 kubelet[2359]: E0430 00:56:16.358966 2359 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:16.377993 systemd[1]: Reloading finished in 237 ms. 
Apr 30 00:56:16.408319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:56:16.419613 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 00:56:16.419930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:56:16.432705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 00:56:16.525473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 00:56:16.529738 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 00:56:16.594742 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 00:56:16.594742 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 00:56:16.594742 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 00:56:16.595232 kubelet[2727]: I0430 00:56:16.594789 2727 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 30 00:56:16.598810 kubelet[2727]: I0430 00:56:16.598777 2727 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Apr 30 00:56:16.598810 kubelet[2727]: I0430 00:56:16.598805 2727 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 30 00:56:16.599001 kubelet[2727]: I0430 00:56:16.598977 2727 server.go:927] "Client rotation is on, will bootstrap in background"
Apr 30 00:56:16.600379 kubelet[2727]: I0430 00:56:16.600346 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 30 00:56:16.601701 kubelet[2727]: I0430 00:56:16.601675 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 30 00:56:16.607091 kubelet[2727]: I0430 00:56:16.607050 2727 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 30 00:56:16.607587 kubelet[2727]: I0430 00:56:16.607554 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 30 00:56:16.607871 kubelet[2727]: I0430 00:56:16.607585 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 30 00:56:16.607951 kubelet[2727]: I0430 00:56:16.607882 2727 topology_manager.go:138] "Creating topology manager with none policy"
Apr 30 00:56:16.607951 kubelet[2727]: I0430 00:56:16.607892 2727 container_manager_linux.go:301] "Creating device plugin manager"
Apr 30 00:56:16.607951 kubelet[2727]: I0430 00:56:16.607926 2727 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:56:16.608034 kubelet[2727]: I0430 00:56:16.608023 2727 kubelet.go:400] "Attempting to sync node with API server"
Apr 30 00:56:16.608068 kubelet[2727]: I0430 00:56:16.608043 2727 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 30 00:56:16.608099 kubelet[2727]: I0430 00:56:16.608069 2727 kubelet.go:312] "Adding apiserver pod source"
Apr 30 00:56:16.608099 kubelet[2727]: I0430 00:56:16.608085 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 30 00:56:16.612478 kubelet[2727]: I0430 00:56:16.609827 2727 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 30 00:56:16.612478 kubelet[2727]: I0430 00:56:16.609983 2727 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 30 00:56:16.612478 kubelet[2727]: I0430 00:56:16.610386 2727 server.go:1264] "Started kubelet"
Apr 30 00:56:16.612478 kubelet[2727]: I0430 00:56:16.612297 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 30 00:56:16.618229 kubelet[2727]: I0430 00:56:16.613159 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 30 00:56:16.618229 kubelet[2727]: I0430 00:56:16.614615 2727 server.go:455] "Adding debug handlers to kubelet server"
Apr 30 00:56:16.618229 kubelet[2727]: I0430 00:56:16.615701 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 30 00:56:16.618229 kubelet[2727]: I0430 00:56:16.615897 2727 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 30 00:56:16.618229 kubelet[2727]: I0430 00:56:16.617286 2727 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 30 00:56:16.621494 kubelet[2727]: I0430 00:56:16.621194 2727 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Apr 30 00:56:16.621494 kubelet[2727]: I0430 00:56:16.621344 2727 reconciler.go:26] "Reconciler: start to sync state"
Apr 30 00:56:16.629767 kubelet[2727]: I0430 00:56:16.628818 2727 factory.go:221] Registration of the systemd container factory successfully
Apr 30 00:56:16.629767 kubelet[2727]: I0430 00:56:16.628928 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 30 00:56:16.631495 kubelet[2727]: E0430 00:56:16.631461 2727 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 30 00:56:16.632536 kubelet[2727]: I0430 00:56:16.632502 2727 factory.go:221] Registration of the containerd container factory successfully
Apr 30 00:56:16.636863 kubelet[2727]: I0430 00:56:16.636815 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 30 00:56:16.637850 kubelet[2727]: I0430 00:56:16.637811 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 30 00:56:16.637850 kubelet[2727]: I0430 00:56:16.637848 2727 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 30 00:56:16.637924 kubelet[2727]: I0430 00:56:16.637869 2727 kubelet.go:2337] "Starting kubelet main sync loop"
Apr 30 00:56:16.637945 kubelet[2727]: E0430 00:56:16.637914 2727 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 30 00:56:16.675403 kubelet[2727]: I0430 00:56:16.675309 2727 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 30 00:56:16.675403 kubelet[2727]: I0430 00:56:16.675328 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 30 00:56:16.675403 kubelet[2727]: I0430 00:56:16.675351 2727 state_mem.go:36] "Initialized new in-memory state store"
Apr 30 00:56:16.675561 kubelet[2727]: I0430 00:56:16.675536 2727 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 30 00:56:16.675586 kubelet[2727]: I0430 00:56:16.675552 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 30 00:56:16.675586 kubelet[2727]: I0430 00:56:16.675571 2727 policy_none.go:49] "None policy: Start"
Apr 30 00:56:16.676465 kubelet[2727]: I0430 00:56:16.676260 2727 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 30 00:56:16.676465 kubelet[2727]: I0430 00:56:16.676281 2727 state_mem.go:35] "Initializing new in-memory state store"
Apr 30 00:56:16.676465 kubelet[2727]: I0430 00:56:16.676409 2727 state_mem.go:75] "Updated machine memory state"
Apr 30 00:56:16.678038 kubelet[2727]: I0430 00:56:16.677466 2727 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 30 00:56:16.678038 kubelet[2727]: I0430 00:56:16.677626 2727 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 30 00:56:16.678038 kubelet[2727]: I0430 00:56:16.677711 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 30 00:56:16.721377 kubelet[2727]: I0430 00:56:16.721349 2727 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Apr 30 00:56:16.728829 kubelet[2727]: I0430 00:56:16.728691 2727 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Apr 30 00:56:16.728930 kubelet[2727]: I0430 00:56:16.728851 2727 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Apr 30 00:56:16.738158 kubelet[2727]: I0430 00:56:16.738065 2727 topology_manager.go:215] "Topology Admit Handler" podUID="8858a45a9f67039427f20f3011918595" podNamespace="kube-system" podName="kube-apiserver-localhost"
Apr 30 00:56:16.738294 kubelet[2727]: I0430 00:56:16.738200 2727 topology_manager.go:215] "Topology Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Apr 30 00:56:16.738294 kubelet[2727]: I0430 00:56:16.738242 2727 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost"
Apr 30 00:56:16.744666 kubelet[2727]: E0430 00:56:16.744602 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 30 00:56:16.822467 kubelet[2727]: I0430 00:56:16.822408 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:56:16.923304 kubelet[2727]: I0430 00:56:16.923242 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:56:16.923422 kubelet[2727]: I0430 00:56:16.923361 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:56:16.923422 kubelet[2727]: I0430 00:56:16.923388 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:56:16.923422 kubelet[2727]: I0430 00:56:16.923406 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost"
Apr 30 00:56:16.923543 kubelet[2727]: I0430 00:56:16.923481 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8858a45a9f67039427f20f3011918595-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8858a45a9f67039427f20f3011918595\") " pod="kube-system/kube-apiserver-localhost"
Apr 30 00:56:16.923543 kubelet[2727]: I0430 00:56:16.923516 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:56:16.923543 kubelet[2727]: I0430 00:56:16.923534 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:56:16.923604 kubelet[2727]: I0430 00:56:16.923551 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost"
Apr 30 00:56:17.044105 kubelet[2727]: E0430 00:56:17.043970 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.046673 kubelet[2727]: E0430 00:56:17.046501 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.048723 kubelet[2727]: E0430 00:56:17.046682 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.139909 sudo[2761]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Apr 30 00:56:17.140258 sudo[2761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Apr 30 00:56:17.566361 sudo[2761]: pam_unix(sudo:session): session closed for user root
Apr 30 00:56:17.608846 kubelet[2727]: I0430 00:56:17.608762 2727 apiserver.go:52] "Watching apiserver"
Apr 30 00:56:17.621476 kubelet[2727]: I0430 00:56:17.621431 2727 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Apr 30 00:56:17.653598 kubelet[2727]: E0430 00:56:17.653549 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.654581 kubelet[2727]: E0430 00:56:17.654235 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.662438 kubelet[2727]: E0430 00:56:17.662203 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 30 00:56:17.663108 kubelet[2727]: E0430 00:56:17.662741 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:17.676697 kubelet[2727]: I0430 00:56:17.675767 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.67572335 podStartE2EDuration="1.67572335s" podCreationTimestamp="2025-04-30 00:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:17.675665578 +0000 UTC m=+1.142529312" watchObservedRunningTime="2025-04-30 00:56:17.67572335 +0000 UTC m=+1.142587084"
Apr 30 00:56:17.693529 kubelet[2727]: I0430 00:56:17.693465 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.693421672 podStartE2EDuration="1.693421672s" podCreationTimestamp="2025-04-30 00:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:17.686496756 +0000 UTC m=+1.153360490" watchObservedRunningTime="2025-04-30 00:56:17.693421672 +0000 UTC m=+1.160285406"
Apr 30 00:56:17.702108 kubelet[2727]: I0430 00:56:17.701539 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.701520777 podStartE2EDuration="1.701520777s" podCreationTimestamp="2025-04-30 00:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:17.693812122 +0000 UTC m=+1.160675856" watchObservedRunningTime="2025-04-30 00:56:17.701520777 +0000 UTC m=+1.168384511"
Apr 30 00:56:18.655218 kubelet[2727]: E0430 00:56:18.655176 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:19.656681 kubelet[2727]: E0430 00:56:19.656639 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:19.750120 sudo[1758]: pam_unix(sudo:session): session closed for user root
Apr 30 00:56:19.751986 sshd[1752]: pam_unix(sshd:session): session closed for user core
Apr 30 00:56:19.755313 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit.
Apr 30 00:56:19.755995 systemd[1]: sshd@6-10.0.0.142:22-10.0.0.1:57530.service: Deactivated successfully.
Apr 30 00:56:19.757894 systemd[1]: session-7.scope: Deactivated successfully.
Apr 30 00:56:19.758600 systemd-logind[1528]: Removed session 7.
Apr 30 00:56:22.290787 kubelet[2727]: E0430 00:56:22.290749 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:22.660386 kubelet[2727]: E0430 00:56:22.660276 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:26.266218 kubelet[2727]: E0430 00:56:26.266172 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:26.668518 kubelet[2727]: E0430 00:56:26.667973 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:29.493358 kubelet[2727]: E0430 00:56:29.493323 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:31.024550 update_engine[1535]: I20250430 00:56:31.024477 1535 update_attempter.cc:509] Updating boot flags...
Apr 30 00:56:31.041510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2812)
Apr 30 00:56:31.072847 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2810)
Apr 30 00:56:31.598319 kubelet[2727]: I0430 00:56:31.598270 2727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 30 00:56:31.598839 kubelet[2727]: I0430 00:56:31.598819 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 30 00:56:31.598873 containerd[1554]: time="2025-04-30T00:56:31.598631874Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 30 00:56:32.643159 kubelet[2727]: I0430 00:56:32.643123 2727 topology_manager.go:215] "Topology Admit Handler" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" podNamespace="kube-system" podName="cilium-s8ksr"
Apr 30 00:56:32.645135 kubelet[2727]: I0430 00:56:32.644907 2727 topology_manager.go:215] "Topology Admit Handler" podUID="9eb8cc56-9276-414d-9bd6-d376031cb3f1" podNamespace="kube-system" podName="kube-proxy-lcxwr"
Apr 30 00:56:32.725823 kubelet[2727]: I0430 00:56:32.725754 2727 topology_manager.go:215] "Topology Admit Handler" podUID="629d93b3-178c-402b-bad4-247be012ead9" podNamespace="kube-system" podName="cilium-operator-599987898-8fm96"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.730824 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-xtables-lock\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.730883 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-net\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.730967 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-lib-modules\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.730989 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf2x7\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-kube-api-access-qf2x7\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.731013 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hostproc\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.737972 kubelet[2727]: I0430 00:56:32.731072 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cni-path\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731093 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9eb8cc56-9276-414d-9bd6-d376031cb3f1-lib-modules\") pod \"kube-proxy-lcxwr\" (UID: \"9eb8cc56-9276-414d-9bd6-d376031cb3f1\") " pod="kube-system/kube-proxy-lcxwr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731114 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-kernel\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731136 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9eb8cc56-9276-414d-9bd6-d376031cb3f1-kube-proxy\") pod \"kube-proxy-lcxwr\" (UID: \"9eb8cc56-9276-414d-9bd6-d376031cb3f1\") " pod="kube-system/kube-proxy-lcxwr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731209 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-run\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731232 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-etc-cni-netd\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738236 kubelet[2727]: I0430 00:56:32.731413 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hubble-tls\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738396 kubelet[2727]: I0430 00:56:32.731480 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkww\" (UniqueName: \"kubernetes.io/projected/9eb8cc56-9276-414d-9bd6-d376031cb3f1-kube-api-access-5lkww\") pod \"kube-proxy-lcxwr\" (UID: \"9eb8cc56-9276-414d-9bd6-d376031cb3f1\") " pod="kube-system/kube-proxy-lcxwr"
Apr 30 00:56:32.738396 kubelet[2727]: I0430 00:56:32.731624 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-cgroup\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738396 kubelet[2727]: I0430 00:56:32.731646 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a2c9a16-9cd9-4236-a541-42db34bc5af5-clustermesh-secrets\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738396 kubelet[2727]: I0430 00:56:32.731739 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-config-path\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738396 kubelet[2727]: I0430 00:56:32.731797 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-bpf-maps\") pod \"cilium-s8ksr\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") " pod="kube-system/cilium-s8ksr"
Apr 30 00:56:32.738569 kubelet[2727]: I0430 00:56:32.731875 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9eb8cc56-9276-414d-9bd6-d376031cb3f1-xtables-lock\") pod \"kube-proxy-lcxwr\" (UID: \"9eb8cc56-9276-414d-9bd6-d376031cb3f1\") " pod="kube-system/kube-proxy-lcxwr"
Apr 30 00:56:32.833103 kubelet[2727]: I0430 00:56:32.833059 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnw8f\" (UniqueName: \"kubernetes.io/projected/629d93b3-178c-402b-bad4-247be012ead9-kube-api-access-tnw8f\") pod \"cilium-operator-599987898-8fm96\" (UID: \"629d93b3-178c-402b-bad4-247be012ead9\") " pod="kube-system/cilium-operator-599987898-8fm96"
Apr 30 00:56:32.833223 kubelet[2727]: I0430 00:56:32.833171 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/629d93b3-178c-402b-bad4-247be012ead9-cilium-config-path\") pod \"cilium-operator-599987898-8fm96\" (UID: \"629d93b3-178c-402b-bad4-247be012ead9\") " pod="kube-system/cilium-operator-599987898-8fm96"
Apr 30 00:56:32.950533 kubelet[2727]: E0430 00:56:32.950491 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:32.951061 containerd[1554]: time="2025-04-30T00:56:32.951026375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8ksr,Uid:9a2c9a16-9cd9-4236-a541-42db34bc5af5,Namespace:kube-system,Attempt:0,}"
Apr 30 00:56:32.952411 kubelet[2727]: E0430 00:56:32.952392 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:32.952786 containerd[1554]: time="2025-04-30T00:56:32.952757601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcxwr,Uid:9eb8cc56-9276-414d-9bd6-d376031cb3f1,Namespace:kube-system,Attempt:0,}"
Apr 30 00:56:32.976397 containerd[1554]: time="2025-04-30T00:56:32.976243321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:56:32.976397 containerd[1554]: time="2025-04-30T00:56:32.976338920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:56:32.977213 containerd[1554]: time="2025-04-30T00:56:32.977013035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:56:32.977213 containerd[1554]: time="2025-04-30T00:56:32.977068474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:56:32.977213 containerd[1554]: time="2025-04-30T00:56:32.977084314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:32.977213 containerd[1554]: time="2025-04-30T00:56:32.977172473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:32.977387 containerd[1554]: time="2025-04-30T00:56:32.976351080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:32.977515 containerd[1554]: time="2025-04-30T00:56:32.977457591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:33.012970 containerd[1554]: time="2025-04-30T00:56:33.012928775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lcxwr,Uid:9eb8cc56-9276-414d-9bd6-d376031cb3f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c27ca7bf00bc8eca0a96247c3cd72f714640f11f1876629fd3fd5f480463ae03\""
Apr 30 00:56:33.016197 containerd[1554]: time="2025-04-30T00:56:33.016166029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s8ksr,Uid:9a2c9a16-9cd9-4236-a541-42db34bc5af5,Namespace:kube-system,Attempt:0,} returns sandbox id \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\""
Apr 30 00:56:33.017036 kubelet[2727]: E0430 00:56:33.017015 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:33.017585 kubelet[2727]: E0430 00:56:33.017556 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:33.022808 containerd[1554]: time="2025-04-30T00:56:33.022707576Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 30 00:56:33.023628 containerd[1554]: time="2025-04-30T00:56:33.023596208Z" level=info msg="CreateContainer within sandbox \"c27ca7bf00bc8eca0a96247c3cd72f714640f11f1876629fd3fd5f480463ae03\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 30 00:56:33.027959 kubelet[2727]: E0430 00:56:33.027925 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:33.028465 containerd[1554]: time="2025-04-30T00:56:33.028409650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8fm96,Uid:629d93b3-178c-402b-bad4-247be012ead9,Namespace:kube-system,Attempt:0,}"
Apr 30 00:56:33.046044 containerd[1554]: time="2025-04-30T00:56:33.045991107Z" level=info msg="CreateContainer within sandbox \"c27ca7bf00bc8eca0a96247c3cd72f714640f11f1876629fd3fd5f480463ae03\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ede0637c1f593ebaefa59b5e890f0c516958ccc4d3f28a70ce4c2087b2b7d701\""
Apr 30 00:56:33.047747 containerd[1554]: time="2025-04-30T00:56:33.047720613Z" level=info msg="StartContainer for \"ede0637c1f593ebaefa59b5e890f0c516958ccc4d3f28a70ce4c2087b2b7d701\""
Apr 30 00:56:33.053204 containerd[1554]: time="2025-04-30T00:56:33.051533103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 00:56:33.053204 containerd[1554]: time="2025-04-30T00:56:33.051581102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 00:56:33.053204 containerd[1554]: time="2025-04-30T00:56:33.051591622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:33.053204 containerd[1554]: time="2025-04-30T00:56:33.051667341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 00:56:33.100200 containerd[1554]: time="2025-04-30T00:56:33.100080830Z" level=info msg="StartContainer for \"ede0637c1f593ebaefa59b5e890f0c516958ccc4d3f28a70ce4c2087b2b7d701\" returns successfully"
Apr 30 00:56:33.109993 containerd[1554]: time="2025-04-30T00:56:33.109849991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-8fm96,Uid:629d93b3-178c-402b-bad4-247be012ead9,Namespace:kube-system,Attempt:0,} returns sandbox id \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\""
Apr 30 00:56:33.110625 kubelet[2727]: E0430 00:56:33.110604 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:33.679531 kubelet[2727]: E0430 00:56:33.679500 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:56:33.688851 kubelet[2727]: I0430 00:56:33.688778 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcxwr" podStartSLOduration=1.68875903 podStartE2EDuration="1.68875903s" podCreationTimestamp="2025-04-30 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:33.688392233 +0000 UTC m=+17.155256007" watchObservedRunningTime="2025-04-30 00:56:33.68875903 +0000 UTC m=+17.155622804"
Apr 30 00:56:43.581730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687397124.mount: Deactivated successfully.
Apr 30 00:56:44.923165 containerd[1554]: time="2025-04-30T00:56:44.923099115Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:56:44.928726 containerd[1554]: time="2025-04-30T00:56:44.928672687Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Apr 30 00:56:44.929782 containerd[1554]: time="2025-04-30T00:56:44.929747402Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:56:44.931522 containerd[1554]: time="2025-04-30T00:56:44.931478833Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 11.908712658s" Apr 30 00:56:44.931572 containerd[1554]: time="2025-04-30T00:56:44.931523793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Apr 30 00:56:44.934189 containerd[1554]: time="2025-04-30T00:56:44.934144220Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 30 00:56:44.941921 containerd[1554]: time="2025-04-30T00:56:44.941859301Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:56:45.126537 containerd[1554]: time="2025-04-30T00:56:45.126482002Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\"" Apr 30 00:56:45.127210 containerd[1554]: time="2025-04-30T00:56:45.126982960Z" level=info msg="StartContainer for \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\"" Apr 30 00:56:45.174811 containerd[1554]: time="2025-04-30T00:56:45.174610931Z" level=info msg="StartContainer for \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\" returns successfully" Apr 30 00:56:45.259575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a-rootfs.mount: Deactivated successfully. Apr 30 00:56:45.337466 containerd[1554]: time="2025-04-30T00:56:45.337228069Z" level=info msg="shim disconnected" id=65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a namespace=k8s.io Apr 30 00:56:45.337466 containerd[1554]: time="2025-04-30T00:56:45.337284509Z" level=warning msg="cleaning up after shim disconnected" id=65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a namespace=k8s.io Apr 30 00:56:45.337466 containerd[1554]: time="2025-04-30T00:56:45.337292509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:56:45.701883 kubelet[2727]: E0430 00:56:45.701683 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:45.707085 containerd[1554]: time="2025-04-30T00:56:45.706849933Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 30 00:56:45.723002 containerd[1554]: time="2025-04-30T00:56:45.722955735Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\"" Apr 30 00:56:45.724971 containerd[1554]: time="2025-04-30T00:56:45.724124649Z" level=info msg="StartContainer for \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\"" Apr 30 00:56:45.771714 containerd[1554]: time="2025-04-30T00:56:45.771661621Z" level=info msg="StartContainer for \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\" returns successfully" Apr 30 00:56:45.790662 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 00:56:45.791558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:56:45.791636 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:56:45.799107 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 00:56:45.813141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 00:56:45.817869 containerd[1554]: time="2025-04-30T00:56:45.817661080Z" level=info msg="shim disconnected" id=363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f namespace=k8s.io Apr 30 00:56:45.817869 containerd[1554]: time="2025-04-30T00:56:45.817716480Z" level=warning msg="cleaning up after shim disconnected" id=363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f namespace=k8s.io Apr 30 00:56:45.817869 containerd[1554]: time="2025-04-30T00:56:45.817724800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:56:46.019782 systemd[1]: Started sshd@7-10.0.0.142:22-10.0.0.1:51856.service - OpenSSH per-connection server daemon (10.0.0.1:51856). 
Apr 30 00:56:46.059373 sshd[3259]: Accepted publickey for core from 10.0.0.1 port 51856 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:56:46.060763 sshd[3259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:56:46.065540 systemd-logind[1528]: New session 8 of user core. Apr 30 00:56:46.075808 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 00:56:46.202635 sshd[3259]: pam_unix(sshd:session): session closed for user core Apr 30 00:56:46.207188 systemd[1]: sshd@7-10.0.0.142:22-10.0.0.1:51856.service: Deactivated successfully. Apr 30 00:56:46.209229 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Apr 30 00:56:46.209807 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 00:56:46.210822 systemd-logind[1528]: Removed session 8. Apr 30 00:56:46.705514 kubelet[2727]: E0430 00:56:46.705478 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:46.707821 containerd[1554]: time="2025-04-30T00:56:46.707784125Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 30 00:56:46.740440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807709900.mount: Deactivated successfully. 
Apr 30 00:56:46.745519 containerd[1554]: time="2025-04-30T00:56:46.745354431Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\"" Apr 30 00:56:46.746323 containerd[1554]: time="2025-04-30T00:56:46.746286587Z" level=info msg="StartContainer for \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\"" Apr 30 00:56:46.802894 containerd[1554]: time="2025-04-30T00:56:46.797969747Z" level=info msg="StartContainer for \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\" returns successfully" Apr 30 00:56:46.878635 containerd[1554]: time="2025-04-30T00:56:46.878563934Z" level=info msg="shim disconnected" id=2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5 namespace=k8s.io Apr 30 00:56:46.878635 containerd[1554]: time="2025-04-30T00:56:46.878621414Z" level=warning msg="cleaning up after shim disconnected" id=2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5 namespace=k8s.io Apr 30 00:56:46.878635 containerd[1554]: time="2025-04-30T00:56:46.878631974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:56:47.709261 kubelet[2727]: E0430 00:56:47.709205 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:47.714275 containerd[1554]: time="2025-04-30T00:56:47.714060261Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 30 00:56:47.732791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount188002556.mount: Deactivated successfully. 
Apr 30 00:56:47.746105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404379840.mount: Deactivated successfully. Apr 30 00:56:47.751526 containerd[1554]: time="2025-04-30T00:56:47.751479574Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\"" Apr 30 00:56:47.753265 containerd[1554]: time="2025-04-30T00:56:47.752511050Z" level=info msg="StartContainer for \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\"" Apr 30 00:56:47.810806 containerd[1554]: time="2025-04-30T00:56:47.810764109Z" level=info msg="StartContainer for \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\" returns successfully" Apr 30 00:56:47.832666 containerd[1554]: time="2025-04-30T00:56:47.832587012Z" level=info msg="shim disconnected" id=5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff namespace=k8s.io Apr 30 00:56:47.832666 containerd[1554]: time="2025-04-30T00:56:47.832653972Z" level=warning msg="cleaning up after shim disconnected" id=5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff namespace=k8s.io Apr 30 00:56:47.832666 containerd[1554]: time="2025-04-30T00:56:47.832665051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 00:56:48.713343 kubelet[2727]: E0430 00:56:48.713302 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:48.716517 containerd[1554]: time="2025-04-30T00:56:48.716264254Z" level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 30 00:56:48.939305 containerd[1554]: time="2025-04-30T00:56:48.939244812Z" 
level=info msg="CreateContainer within sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\"" Apr 30 00:56:48.939819 containerd[1554]: time="2025-04-30T00:56:48.939786730Z" level=info msg="StartContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\"" Apr 30 00:56:49.000371 containerd[1554]: time="2025-04-30T00:56:49.000198829Z" level=info msg="StartContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" returns successfully" Apr 30 00:56:49.215561 kubelet[2727]: I0430 00:56:49.212588 2727 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 30 00:56:49.248063 kubelet[2727]: I0430 00:56:49.248004 2727 topology_manager.go:215] "Topology Admit Handler" podUID="5fcea780-bb06-4226-abdd-bad41a12aac3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qpdgd" Apr 30 00:56:49.248214 kubelet[2727]: I0430 00:56:49.248205 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c645b65a-43a6-462f-8cc7-da654fe2b472" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fqg6h" Apr 30 00:56:49.356458 kubelet[2727]: I0430 00:56:49.356314 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fcea780-bb06-4226-abdd-bad41a12aac3-config-volume\") pod \"coredns-7db6d8ff4d-qpdgd\" (UID: \"5fcea780-bb06-4226-abdd-bad41a12aac3\") " pod="kube-system/coredns-7db6d8ff4d-qpdgd" Apr 30 00:56:49.356458 kubelet[2727]: I0430 00:56:49.356358 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c645b65a-43a6-462f-8cc7-da654fe2b472-config-volume\") pod \"coredns-7db6d8ff4d-fqg6h\" (UID: \"c645b65a-43a6-462f-8cc7-da654fe2b472\") " 
pod="kube-system/coredns-7db6d8ff4d-fqg6h" Apr 30 00:56:49.356458 kubelet[2727]: I0430 00:56:49.356380 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5r6c\" (UniqueName: \"kubernetes.io/projected/c645b65a-43a6-462f-8cc7-da654fe2b472-kube-api-access-g5r6c\") pod \"coredns-7db6d8ff4d-fqg6h\" (UID: \"c645b65a-43a6-462f-8cc7-da654fe2b472\") " pod="kube-system/coredns-7db6d8ff4d-fqg6h" Apr 30 00:56:49.356458 kubelet[2727]: I0430 00:56:49.356453 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgzt\" (UniqueName: \"kubernetes.io/projected/5fcea780-bb06-4226-abdd-bad41a12aac3-kube-api-access-twgzt\") pod \"coredns-7db6d8ff4d-qpdgd\" (UID: \"5fcea780-bb06-4226-abdd-bad41a12aac3\") " pod="kube-system/coredns-7db6d8ff4d-qpdgd" Apr 30 00:56:49.554904 kubelet[2727]: E0430 00:56:49.554858 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:49.556854 containerd[1554]: time="2025-04-30T00:56:49.556691109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fqg6h,Uid:c645b65a-43a6-462f-8cc7-da654fe2b472,Namespace:kube-system,Attempt:0,}" Apr 30 00:56:49.560366 kubelet[2727]: E0430 00:56:49.560244 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:49.562455 containerd[1554]: time="2025-04-30T00:56:49.562195926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qpdgd,Uid:5fcea780-bb06-4226-abdd-bad41a12aac3,Namespace:kube-system,Attempt:0,}" Apr 30 00:56:49.741904 kubelet[2727]: E0430 00:56:49.741878 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:49.786748 containerd[1554]: time="2025-04-30T00:56:49.786667831Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:56:49.787425 containerd[1554]: time="2025-04-30T00:56:49.787345828Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Apr 30 00:56:49.788902 containerd[1554]: time="2025-04-30T00:56:49.788859342Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 00:56:49.790042 containerd[1554]: time="2025-04-30T00:56:49.790007417Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.855814198s" Apr 30 00:56:49.790042 containerd[1554]: time="2025-04-30T00:56:49.790040417Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Apr 30 00:56:49.796844 containerd[1554]: time="2025-04-30T00:56:49.796451550Z" level=info msg="CreateContainer within sandbox \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 30 00:56:49.810632 containerd[1554]: 
time="2025-04-30T00:56:49.810508491Z" level=info msg="CreateContainer within sandbox \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\"" Apr 30 00:56:49.811057 containerd[1554]: time="2025-04-30T00:56:49.811022449Z" level=info msg="StartContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\"" Apr 30 00:56:49.881347 containerd[1554]: time="2025-04-30T00:56:49.881295036Z" level=info msg="StartContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" returns successfully" Apr 30 00:56:50.743716 kubelet[2727]: E0430 00:56:50.743683 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:50.744102 kubelet[2727]: E0430 00:56:50.743920 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:50.797297 kubelet[2727]: I0430 00:56:50.797237 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s8ksr" podStartSLOduration=6.884718611 podStartE2EDuration="18.797216766s" podCreationTimestamp="2025-04-30 00:56:32 +0000 UTC" firstStartedPulling="2025-04-30 00:56:33.021509425 +0000 UTC m=+16.488373119" lastFinishedPulling="2025-04-30 00:56:44.93400754 +0000 UTC m=+28.400871274" observedRunningTime="2025-04-30 00:56:49.760010142 +0000 UTC m=+33.226873916" watchObservedRunningTime="2025-04-30 00:56:50.797216766 +0000 UTC m=+34.264080500" Apr 30 00:56:51.219758 systemd[1]: Started sshd@8-10.0.0.142:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). 
Apr 30 00:56:51.263257 sshd[3602]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:56:51.265156 sshd[3602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:56:51.269396 systemd-logind[1528]: New session 9 of user core. Apr 30 00:56:51.278780 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 30 00:56:51.404372 sshd[3602]: pam_unix(sshd:session): session closed for user core Apr 30 00:56:51.408160 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Apr 30 00:56:51.408544 systemd[1]: sshd@8-10.0.0.142:22-10.0.0.1:51864.service: Deactivated successfully. Apr 30 00:56:51.410841 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 00:56:51.411970 systemd-logind[1528]: Removed session 9. Apr 30 00:56:51.746996 kubelet[2727]: E0430 00:56:51.746956 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:51.747940 kubelet[2727]: E0430 00:56:51.747921 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:53.465415 systemd-networkd[1238]: cilium_host: Link UP Apr 30 00:56:53.465738 systemd-networkd[1238]: cilium_net: Link UP Apr 30 00:56:53.465967 systemd-networkd[1238]: cilium_net: Gained carrier Apr 30 00:56:53.466112 systemd-networkd[1238]: cilium_host: Gained carrier Apr 30 00:56:53.466213 systemd-networkd[1238]: cilium_net: Gained IPv6LL Apr 30 00:56:53.466324 systemd-networkd[1238]: cilium_host: Gained IPv6LL Apr 30 00:56:53.572008 systemd-networkd[1238]: cilium_vxlan: Link UP Apr 30 00:56:53.572015 systemd-networkd[1238]: cilium_vxlan: Gained carrier Apr 30 00:56:53.900513 kernel: NET: Registered PF_ALG protocol family Apr 30 00:56:54.499168 
systemd-networkd[1238]: lxc_health: Link UP Apr 30 00:56:54.508675 systemd-networkd[1238]: lxc_health: Gained carrier Apr 30 00:56:54.772389 systemd-networkd[1238]: lxc9e45bfbed739: Link UP Apr 30 00:56:54.788697 systemd-networkd[1238]: lxc42f119d559d7: Link UP Apr 30 00:56:54.791601 kernel: eth0: renamed from tmp175f1 Apr 30 00:56:54.796511 systemd-networkd[1238]: lxc9e45bfbed739: Gained carrier Apr 30 00:56:54.797474 kernel: eth0: renamed from tmpb7321 Apr 30 00:56:54.801244 systemd-networkd[1238]: lxc42f119d559d7: Gained carrier Apr 30 00:56:54.958194 kubelet[2727]: E0430 00:56:54.957984 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:54.981017 kubelet[2727]: I0430 00:56:54.980237 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-8fm96" podStartSLOduration=6.300233485 podStartE2EDuration="22.980213517s" podCreationTimestamp="2025-04-30 00:56:32 +0000 UTC" firstStartedPulling="2025-04-30 00:56:33.11121642 +0000 UTC m=+16.578080154" lastFinishedPulling="2025-04-30 00:56:49.791196452 +0000 UTC m=+33.258060186" observedRunningTime="2025-04-30 00:56:50.799467357 +0000 UTC m=+34.266331091" watchObservedRunningTime="2025-04-30 00:56:54.980213517 +0000 UTC m=+38.447077331" Apr 30 00:56:55.254646 systemd-networkd[1238]: cilium_vxlan: Gained IPv6LL Apr 30 00:56:55.753087 kubelet[2727]: E0430 00:56:55.753033 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:56.407679 systemd-networkd[1238]: lxc_health: Gained IPv6LL Apr 30 00:56:56.413694 systemd[1]: Started sshd@9-10.0.0.142:22-10.0.0.1:35238.service - OpenSSH per-connection server daemon (10.0.0.1:35238). 
Apr 30 00:56:56.452428 sshd[3998]: Accepted publickey for core from 10.0.0.1 port 35238 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:56:56.453516 sshd[3998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:56:56.457746 systemd-logind[1528]: New session 10 of user core. Apr 30 00:56:56.462685 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 00:56:56.470783 systemd-networkd[1238]: lxc9e45bfbed739: Gained IPv6LL Apr 30 00:56:56.535871 systemd-networkd[1238]: lxc42f119d559d7: Gained IPv6LL Apr 30 00:56:56.595307 sshd[3998]: pam_unix(sshd:session): session closed for user core Apr 30 00:56:56.598893 systemd[1]: sshd@9-10.0.0.142:22-10.0.0.1:35238.service: Deactivated successfully. Apr 30 00:56:56.601272 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Apr 30 00:56:56.601291 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 00:56:56.602923 systemd-logind[1528]: Removed session 10. Apr 30 00:56:58.432071 containerd[1554]: time="2025-04-30T00:56:58.431821788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:56:58.432071 containerd[1554]: time="2025-04-30T00:56:58.431998348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:56:58.432710 containerd[1554]: time="2025-04-30T00:56:58.432025468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:58.433176 containerd[1554]: time="2025-04-30T00:56:58.432575546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:58.433556 containerd[1554]: time="2025-04-30T00:56:58.433477023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:56:58.433715 containerd[1554]: time="2025-04-30T00:56:58.433670662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:56:58.433866 containerd[1554]: time="2025-04-30T00:56:58.433783742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:58.434070 containerd[1554]: time="2025-04-30T00:56:58.434001101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:56:58.458674 systemd-resolved[1443]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:56:58.459939 systemd-resolved[1443]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 30 00:56:58.485753 containerd[1554]: time="2025-04-30T00:56:58.485714775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fqg6h,Uid:c645b65a-43a6-462f-8cc7-da654fe2b472,Namespace:kube-system,Attempt:0,} returns sandbox id \"175f1f45cdd41b8709a1d44aaa132f6400917f057f009d8065a333a76de71159\"" Apr 30 00:56:58.487256 kubelet[2727]: E0430 00:56:58.487228 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:58.490492 containerd[1554]: time="2025-04-30T00:56:58.490345160Z" level=info msg="CreateContainer within sandbox \"175f1f45cdd41b8709a1d44aaa132f6400917f057f009d8065a333a76de71159\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:56:58.491389 containerd[1554]: time="2025-04-30T00:56:58.491362757Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-qpdgd,Uid:5fcea780-bb06-4226-abdd-bad41a12aac3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7321e2d16a91e8871a3d966b495f7ea477830177e395c965a0dd42bb1173acd\"" Apr 30 00:56:58.492982 kubelet[2727]: E0430 00:56:58.492930 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:58.494917 containerd[1554]: time="2025-04-30T00:56:58.494783866Z" level=info msg="CreateContainer within sandbox \"b7321e2d16a91e8871a3d966b495f7ea477830177e395c965a0dd42bb1173acd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 30 00:56:58.512314 containerd[1554]: time="2025-04-30T00:56:58.512260370Z" level=info msg="CreateContainer within sandbox \"175f1f45cdd41b8709a1d44aaa132f6400917f057f009d8065a333a76de71159\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"327c707d0aa0518b6cd13a121ec39f77cdb9856985855628baa39685b0793adc\"" Apr 30 00:56:58.512862 containerd[1554]: time="2025-04-30T00:56:58.512826848Z" level=info msg="StartContainer for \"327c707d0aa0518b6cd13a121ec39f77cdb9856985855628baa39685b0793adc\"" Apr 30 00:56:58.515751 containerd[1554]: time="2025-04-30T00:56:58.515607239Z" level=info msg="CreateContainer within sandbox \"b7321e2d16a91e8871a3d966b495f7ea477830177e395c965a0dd42bb1173acd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"76aa7b15c72e5f03322c18d8dfe1fd4b02c8e83758d792f5126e16c77c77e769\"" Apr 30 00:56:58.517315 containerd[1554]: time="2025-04-30T00:56:58.516796755Z" level=info msg="StartContainer for \"76aa7b15c72e5f03322c18d8dfe1fd4b02c8e83758d792f5126e16c77c77e769\"" Apr 30 00:56:58.580860 containerd[1554]: time="2025-04-30T00:56:58.578110358Z" level=info msg="StartContainer for \"327c707d0aa0518b6cd13a121ec39f77cdb9856985855628baa39685b0793adc\" returns successfully" Apr 30 00:56:58.580860 containerd[1554]: 
time="2025-04-30T00:56:58.578193318Z" level=info msg="StartContainer for \"76aa7b15c72e5f03322c18d8dfe1fd4b02c8e83758d792f5126e16c77c77e769\" returns successfully" Apr 30 00:56:58.765131 kubelet[2727]: E0430 00:56:58.765011 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:58.768024 kubelet[2727]: E0430 00:56:58.767989 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:56:58.786542 kubelet[2727]: I0430 00:56:58.786487 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qpdgd" podStartSLOduration=26.786469008 podStartE2EDuration="26.786469008s" podCreationTimestamp="2025-04-30 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:58.786463448 +0000 UTC m=+42.253327262" watchObservedRunningTime="2025-04-30 00:56:58.786469008 +0000 UTC m=+42.253332742" Apr 30 00:56:58.801091 kubelet[2727]: I0430 00:56:58.800904 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fqg6h" podStartSLOduration=26.800885562 podStartE2EDuration="26.800885562s" podCreationTimestamp="2025-04-30 00:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:56:58.800350924 +0000 UTC m=+42.267214658" watchObservedRunningTime="2025-04-30 00:56:58.800885562 +0000 UTC m=+42.267749256" Apr 30 00:56:59.770189 kubelet[2727]: E0430 00:56:59.770085 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Apr 30 00:56:59.770189 kubelet[2727]: E0430 00:56:59.770105 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:00.771765 kubelet[2727]: E0430 00:57:00.771724 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:00.775890 kubelet[2727]: E0430 00:57:00.772080 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:01.617739 systemd[1]: Started sshd@10-10.0.0.142:22-10.0.0.1:35246.service - OpenSSH per-connection server daemon (10.0.0.1:35246). Apr 30 00:57:01.662640 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 35246 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:57:01.664365 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:01.675356 systemd-logind[1528]: New session 11 of user core. Apr 30 00:57:01.685264 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 30 00:57:01.812568 sshd[4189]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:01.821899 systemd[1]: Started sshd@11-10.0.0.142:22-10.0.0.1:35262.service - OpenSSH per-connection server daemon (10.0.0.1:35262). Apr 30 00:57:01.822396 systemd[1]: sshd@10-10.0.0.142:22-10.0.0.1:35246.service: Deactivated successfully. Apr 30 00:57:01.826983 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Apr 30 00:57:01.827656 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 00:57:01.829087 systemd-logind[1528]: Removed session 11. 
Apr 30 00:57:01.864859 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 35262 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:01.866682 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:01.871100 systemd-logind[1528]: New session 12 of user core.
Apr 30 00:57:01.883764 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 30 00:57:02.039679 sshd[4202]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:02.048846 systemd[1]: Started sshd@12-10.0.0.142:22-10.0.0.1:35278.service - OpenSSH per-connection server daemon (10.0.0.1:35278).
Apr 30 00:57:02.049318 systemd[1]: sshd@11-10.0.0.142:22-10.0.0.1:35262.service: Deactivated successfully.
Apr 30 00:57:02.051173 systemd[1]: session-12.scope: Deactivated successfully.
Apr 30 00:57:02.055180 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit.
Apr 30 00:57:02.062533 systemd-logind[1528]: Removed session 12.
Apr 30 00:57:02.098111 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 35278 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:02.099500 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:02.105195 systemd-logind[1528]: New session 13 of user core.
Apr 30 00:57:02.115850 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 30 00:57:02.229676 sshd[4217]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:02.233015 systemd[1]: sshd@12-10.0.0.142:22-10.0.0.1:35278.service: Deactivated successfully.
Apr 30 00:57:02.236026 systemd[1]: session-13.scope: Deactivated successfully.
Apr 30 00:57:02.236959 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit.
Apr 30 00:57:02.238001 systemd-logind[1528]: Removed session 13.
Apr 30 00:57:07.241782 systemd[1]: Started sshd@13-10.0.0.142:22-10.0.0.1:43394.service - OpenSSH per-connection server daemon (10.0.0.1:43394).
Apr 30 00:57:07.279477 sshd[4236]: Accepted publickey for core from 10.0.0.1 port 43394 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:07.280955 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:07.284923 systemd-logind[1528]: New session 14 of user core.
Apr 30 00:57:07.295856 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 30 00:57:07.431873 sshd[4236]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:07.435056 systemd[1]: sshd@13-10.0.0.142:22-10.0.0.1:43394.service: Deactivated successfully.
Apr 30 00:57:07.439499 systemd[1]: session-14.scope: Deactivated successfully.
Apr 30 00:57:07.439664 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit.
Apr 30 00:57:07.441388 systemd-logind[1528]: Removed session 14.
Apr 30 00:57:12.443731 systemd[1]: Started sshd@14-10.0.0.142:22-10.0.0.1:43410.service - OpenSSH per-connection server daemon (10.0.0.1:43410).
Apr 30 00:57:12.488049 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 43410 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:12.489548 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:12.498745 systemd-logind[1528]: New session 15 of user core.
Apr 30 00:57:12.508803 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 30 00:57:12.645977 sshd[4251]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:12.658804 systemd[1]: Started sshd@15-10.0.0.142:22-10.0.0.1:38868.service - OpenSSH per-connection server daemon (10.0.0.1:38868).
Apr 30 00:57:12.659262 systemd[1]: sshd@14-10.0.0.142:22-10.0.0.1:43410.service: Deactivated successfully.
Apr 30 00:57:12.665899 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit.
Apr 30 00:57:12.667378 systemd[1]: session-15.scope: Deactivated successfully.
Apr 30 00:57:12.669238 systemd-logind[1528]: Removed session 15.
Apr 30 00:57:12.701595 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:12.702501 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:12.711182 systemd-logind[1528]: New session 16 of user core.
Apr 30 00:57:12.729794 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 30 00:57:12.934087 sshd[4263]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:12.942962 systemd[1]: Started sshd@16-10.0.0.142:22-10.0.0.1:38872.service - OpenSSH per-connection server daemon (10.0.0.1:38872).
Apr 30 00:57:12.943427 systemd[1]: sshd@15-10.0.0.142:22-10.0.0.1:38868.service: Deactivated successfully.
Apr 30 00:57:12.950021 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit.
Apr 30 00:57:12.950640 systemd[1]: session-16.scope: Deactivated successfully.
Apr 30 00:57:12.952123 systemd-logind[1528]: Removed session 16.
Apr 30 00:57:12.990406 sshd[4276]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:12.992171 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:12.997625 systemd-logind[1528]: New session 17 of user core.
Apr 30 00:57:13.007886 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 30 00:57:14.540799 sshd[4276]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:14.548850 systemd[1]: Started sshd@17-10.0.0.142:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874).
Apr 30 00:57:14.550806 systemd[1]: sshd@16-10.0.0.142:22-10.0.0.1:38872.service: Deactivated successfully.
Apr 30 00:57:14.556684 systemd[1]: session-17.scope: Deactivated successfully.
Apr 30 00:57:14.561836 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit.
Apr 30 00:57:14.566789 systemd-logind[1528]: Removed session 17.
Apr 30 00:57:14.598483 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:14.599965 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:14.606271 systemd-logind[1528]: New session 18 of user core.
Apr 30 00:57:14.618837 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 30 00:57:14.880357 sshd[4295]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:14.893863 systemd[1]: Started sshd@18-10.0.0.142:22-10.0.0.1:38876.service - OpenSSH per-connection server daemon (10.0.0.1:38876).
Apr 30 00:57:14.894426 systemd[1]: sshd@17-10.0.0.142:22-10.0.0.1:38874.service: Deactivated successfully.
Apr 30 00:57:14.896595 systemd[1]: session-18.scope: Deactivated successfully.
Apr 30 00:57:14.898529 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit.
Apr 30 00:57:14.900173 systemd-logind[1528]: Removed session 18.
Apr 30 00:57:14.933997 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 38876 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:14.935709 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:14.941493 systemd-logind[1528]: New session 19 of user core.
Apr 30 00:57:14.956922 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 30 00:57:15.089101 sshd[4311]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:15.093040 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit.
Apr 30 00:57:15.093228 systemd[1]: sshd@18-10.0.0.142:22-10.0.0.1:38876.service: Deactivated successfully.
Apr 30 00:57:15.098900 systemd[1]: session-19.scope: Deactivated successfully.
Apr 30 00:57:15.100035 systemd-logind[1528]: Removed session 19.
Apr 30 00:57:20.103759 systemd[1]: Started sshd@19-10.0.0.142:22-10.0.0.1:38888.service - OpenSSH per-connection server daemon (10.0.0.1:38888).
Apr 30 00:57:20.154623 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 38888 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:20.156144 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:20.162003 systemd-logind[1528]: New session 20 of user core.
Apr 30 00:57:20.172002 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 30 00:57:20.296273 sshd[4334]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:20.300038 systemd[1]: sshd@19-10.0.0.142:22-10.0.0.1:38888.service: Deactivated successfully.
Apr 30 00:57:20.302368 systemd-logind[1528]: Session 20 logged out. Waiting for processes to exit.
Apr 30 00:57:20.302992 systemd[1]: session-20.scope: Deactivated successfully.
Apr 30 00:57:20.303992 systemd-logind[1528]: Removed session 20.
Apr 30 00:57:25.312753 systemd[1]: Started sshd@20-10.0.0.142:22-10.0.0.1:48528.service - OpenSSH per-connection server daemon (10.0.0.1:48528).
Apr 30 00:57:25.356306 sshd[4349]: Accepted publickey for core from 10.0.0.1 port 48528 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:25.357276 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:25.362820 systemd-logind[1528]: New session 21 of user core.
Apr 30 00:57:25.370747 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 30 00:57:25.501293 sshd[4349]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:25.506330 systemd[1]: sshd@20-10.0.0.142:22-10.0.0.1:48528.service: Deactivated successfully.
Apr 30 00:57:25.512057 systemd[1]: session-21.scope: Deactivated successfully.
Apr 30 00:57:25.513588 systemd-logind[1528]: Session 21 logged out. Waiting for processes to exit.
Apr 30 00:57:25.514887 systemd-logind[1528]: Removed session 21.
Apr 30 00:57:30.516980 systemd[1]: Started sshd@21-10.0.0.142:22-10.0.0.1:48532.service - OpenSSH per-connection server daemon (10.0.0.1:48532).
Apr 30 00:57:30.557175 sshd[4364]: Accepted publickey for core from 10.0.0.1 port 48532 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:30.558960 sshd[4364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:30.563587 systemd-logind[1528]: New session 22 of user core.
Apr 30 00:57:30.573832 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 30 00:57:30.688843 sshd[4364]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:30.697694 systemd[1]: Started sshd@22-10.0.0.142:22-10.0.0.1:48536.service - OpenSSH per-connection server daemon (10.0.0.1:48536).
Apr 30 00:57:30.698107 systemd[1]: sshd@21-10.0.0.142:22-10.0.0.1:48532.service: Deactivated successfully.
Apr 30 00:57:30.700496 systemd[1]: session-22.scope: Deactivated successfully.
Apr 30 00:57:30.701334 systemd-logind[1528]: Session 22 logged out. Waiting for processes to exit.
Apr 30 00:57:30.705378 systemd-logind[1528]: Removed session 22.
Apr 30 00:57:30.736018 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 48536 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y
Apr 30 00:57:30.737375 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 30 00:57:30.743397 systemd-logind[1528]: New session 23 of user core.
Apr 30 00:57:30.752802 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 30 00:57:32.983837 containerd[1554]: time="2025-04-30T00:57:32.983726520Z" level=info msg="StopContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" with timeout 30 (s)"
Apr 30 00:57:32.985694 containerd[1554]: time="2025-04-30T00:57:32.985339062Z" level=info msg="Stop container \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" with signal terminated"
Apr 30 00:57:32.994163 containerd[1554]: time="2025-04-30T00:57:32.994116618Z" level=info msg="StopContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" with timeout 2 (s)"
Apr 30 00:57:32.994525 containerd[1554]: time="2025-04-30T00:57:32.994434622Z" level=info msg="Stop container \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" with signal terminated"
Apr 30 00:57:32.994910 containerd[1554]: time="2025-04-30T00:57:32.994753506Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 30 00:57:33.003195 systemd-networkd[1238]: lxc_health: Link DOWN
Apr 30 00:57:33.003209 systemd-networkd[1238]: lxc_health: Lost carrier
Apr 30 00:57:33.023769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97-rootfs.mount: Deactivated successfully.
Apr 30 00:57:33.042501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6-rootfs.mount: Deactivated successfully.
Apr 30 00:57:33.076850 containerd[1554]: time="2025-04-30T00:57:33.076778039Z" level=info msg="shim disconnected" id=795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97 namespace=k8s.io
Apr 30 00:57:33.076850 containerd[1554]: time="2025-04-30T00:57:33.076839999Z" level=warning msg="cleaning up after shim disconnected" id=795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97 namespace=k8s.io
Apr 30 00:57:33.076850 containerd[1554]: time="2025-04-30T00:57:33.076849240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:33.083763 containerd[1554]: time="2025-04-30T00:57:33.083687647Z" level=info msg="shim disconnected" id=d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6 namespace=k8s.io
Apr 30 00:57:33.083763 containerd[1554]: time="2025-04-30T00:57:33.083746288Z" level=warning msg="cleaning up after shim disconnected" id=d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6 namespace=k8s.io
Apr 30 00:57:33.083763 containerd[1554]: time="2025-04-30T00:57:33.083755128Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:33.119386 containerd[1554]: time="2025-04-30T00:57:33.119334144Z" level=info msg="StopContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" returns successfully"
Apr 30 00:57:33.120121 containerd[1554]: time="2025-04-30T00:57:33.119968232Z" level=info msg="StopPodSandbox for \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\""
Apr 30 00:57:33.120121 containerd[1554]: time="2025-04-30T00:57:33.120007192Z" level=info msg="Container to stop \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.120819 containerd[1554]: time="2025-04-30T00:57:33.120784642Z" level=info msg="StopContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" returns successfully"
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121138367Z" level=info msg="StopPodSandbox for \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\""
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121177087Z" level=info msg="Container to stop \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121189247Z" level=info msg="Container to stop \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121198448Z" level=info msg="Container to stop \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121208848Z" level=info msg="Container to stop \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.121311 containerd[1554]: time="2025-04-30T00:57:33.121217768Z" level=info msg="Container to stop \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 30 00:57:33.122376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6-shm.mount: Deactivated successfully.
Apr 30 00:57:33.124885 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c-shm.mount: Deactivated successfully.
Apr 30 00:57:33.160791 containerd[1554]: time="2025-04-30T00:57:33.160574032Z" level=info msg="shim disconnected" id=372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6 namespace=k8s.io
Apr 30 00:57:33.160791 containerd[1554]: time="2025-04-30T00:57:33.160640913Z" level=warning msg="cleaning up after shim disconnected" id=372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6 namespace=k8s.io
Apr 30 00:57:33.160791 containerd[1554]: time="2025-04-30T00:57:33.160649593Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:33.180235 containerd[1554]: time="2025-04-30T00:57:33.180172283Z" level=info msg="shim disconnected" id=41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c namespace=k8s.io
Apr 30 00:57:33.180235 containerd[1554]: time="2025-04-30T00:57:33.180229803Z" level=warning msg="cleaning up after shim disconnected" id=41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c namespace=k8s.io
Apr 30 00:57:33.180235 containerd[1554]: time="2025-04-30T00:57:33.180240124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:33.182688 containerd[1554]: time="2025-04-30T00:57:33.182640794Z" level=warning msg="cleanup warnings time=\"2025-04-30T00:57:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 30 00:57:33.186498 containerd[1554]: time="2025-04-30T00:57:33.184326456Z" level=info msg="TearDown network for sandbox \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\" successfully"
Apr 30 00:57:33.186633 containerd[1554]: time="2025-04-30T00:57:33.186506604Z" level=info msg="StopPodSandbox for \"372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6\" returns successfully"
Apr 30 00:57:33.197958 containerd[1554]: time="2025-04-30T00:57:33.197912030Z" level=info msg="TearDown network for sandbox \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" successfully"
Apr 30 00:57:33.197958 containerd[1554]: time="2025-04-30T00:57:33.197951670Z" level=info msg="StopPodSandbox for \"41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c\" returns successfully"
Apr 30 00:57:33.264385 kubelet[2727]: I0430 00:57:33.264257 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cni-path\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.264385 kubelet[2727]: I0430 00:57:33.264314 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/629d93b3-178c-402b-bad4-247be012ead9-cilium-config-path\") pod \"629d93b3-178c-402b-bad4-247be012ead9\" (UID: \"629d93b3-178c-402b-bad4-247be012ead9\") "
Apr 30 00:57:33.264385 kubelet[2727]: I0430 00:57:33.264332 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-net\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.264385 kubelet[2727]: I0430 00:57:33.264351 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a2c9a16-9cd9-4236-a541-42db34bc5af5-clustermesh-secrets\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.264914 kubelet[2727]: I0430 00:57:33.264367 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-kernel\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265133 kubelet[2727]: I0430 00:57:33.265112 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-cgroup\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265167 kubelet[2727]: I0430 00:57:33.265150 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-etc-cni-netd\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265201 kubelet[2727]: I0430 00:57:33.265177 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf2x7\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-kube-api-access-qf2x7\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265228 kubelet[2727]: I0430 00:57:33.265218 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-run\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265251 kubelet[2727]: I0430 00:57:33.265236 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-config-path\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265275 kubelet[2727]: I0430 00:57:33.265252 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-bpf-maps\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265275 kubelet[2727]: I0430 00:57:33.265268 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hubble-tls\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265321 kubelet[2727]: I0430 00:57:33.265281 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-xtables-lock\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265321 kubelet[2727]: I0430 00:57:33.265295 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hostproc\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.265321 kubelet[2727]: I0430 00:57:33.265312 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnw8f\" (UniqueName: \"kubernetes.io/projected/629d93b3-178c-402b-bad4-247be012ead9-kube-api-access-tnw8f\") pod \"629d93b3-178c-402b-bad4-247be012ead9\" (UID: \"629d93b3-178c-402b-bad4-247be012ead9\") "
Apr 30 00:57:33.265406 kubelet[2727]: I0430 00:57:33.265327 2727 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-lib-modules\") pod \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\" (UID: \"9a2c9a16-9cd9-4236-a541-42db34bc5af5\") "
Apr 30 00:57:33.268800 kubelet[2727]: I0430 00:57:33.268764 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.268902 kubelet[2727]: I0430 00:57:33.268838 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.268902 kubelet[2727]: I0430 00:57:33.268858 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.268902 kubelet[2727]: I0430 00:57:33.268873 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.271476 kubelet[2727]: I0430 00:57:33.271058 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.275845 kubelet[2727]: I0430 00:57:33.275797 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/629d93b3-178c-402b-bad4-247be012ead9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "629d93b3-178c-402b-bad4-247be012ead9" (UID: "629d93b3-178c-402b-bad4-247be012ead9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:57:33.276325 kubelet[2727]: I0430 00:57:33.276022 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cni-path" (OuterVolumeSpecName: "cni-path") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.276325 kubelet[2727]: I0430 00:57:33.276066 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hostproc" (OuterVolumeSpecName: "hostproc") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.276325 kubelet[2727]: I0430 00:57:33.276095 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.276730 kubelet[2727]: I0430 00:57:33.276699 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:57:33.276821 kubelet[2727]: I0430 00:57:33.276752 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.276821 kubelet[2727]: I0430 00:57:33.276773 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 30 00:57:33.277535 kubelet[2727]: I0430 00:57:33.277503 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a2c9a16-9cd9-4236-a541-42db34bc5af5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 30 00:57:33.277946 kubelet[2727]: I0430 00:57:33.277911 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-kube-api-access-qf2x7" (OuterVolumeSpecName: "kube-api-access-qf2x7") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "kube-api-access-qf2x7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:57:33.280700 kubelet[2727]: I0430 00:57:33.280648 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9a2c9a16-9cd9-4236-a541-42db34bc5af5" (UID: "9a2c9a16-9cd9-4236-a541-42db34bc5af5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 30 00:57:33.281271 kubelet[2727]: I0430 00:57:33.281242 2727 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629d93b3-178c-402b-bad4-247be012ead9-kube-api-access-tnw8f" (OuterVolumeSpecName: "kube-api-access-tnw8f") pod "629d93b3-178c-402b-bad4-247be012ead9" (UID: "629d93b3-178c-402b-bad4-247be012ead9"). InnerVolumeSpecName "kube-api-access-tnw8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 30 00:57:33.365675 kubelet[2727]: I0430 00:57:33.365631 2727 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365675 kubelet[2727]: I0430 00:57:33.365666 2727 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365675 kubelet[2727]: I0430 00:57:33.365675 2727 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365711 2727 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365721 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-qf2x7\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-kube-api-access-qf2x7\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365729 2727 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-run\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365736 2727 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365744 2727 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hubble-tls\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365751 2727 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365758 2727 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-hostproc\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.365851 kubelet[2727]: I0430 00:57:33.365765 2727 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-tnw8f\" (UniqueName: \"kubernetes.io/projected/629d93b3-178c-402b-bad4-247be012ead9-kube-api-access-tnw8f\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.366020 kubelet[2727]: I0430 00:57:33.365773 2727 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-lib-modules\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.366020 kubelet[2727]: I0430 00:57:33.365780 2727 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-cni-path\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.366020 kubelet[2727]: I0430 00:57:33.365787 2727 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/629d93b3-178c-402b-bad4-247be012ead9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.366020 kubelet[2727]: I0430 00:57:33.365796 2727 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9a2c9a16-9cd9-4236-a541-42db34bc5af5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.366020 kubelet[2727]: I0430 00:57:33.365803 2727 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9a2c9a16-9cd9-4236-a541-42db34bc5af5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Apr 30 00:57:33.843881 kubelet[2727]: I0430 00:57:33.842894 2727 scope.go:117] "RemoveContainer" containerID="d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6"
Apr 30 00:57:33.844180 containerd[1554]: time="2025-04-30T00:57:33.844140706Z" level=info msg="RemoveContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\""
Apr 30 00:57:33.849224 containerd[1554]: time="2025-04-30T00:57:33.849160410Z" level=info msg="RemoveContainer for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" returns successfully"
Apr 30 00:57:33.850267 kubelet[2727]: I0430 00:57:33.849522 2727 scope.go:117] "RemoveContainer" containerID="5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff"
Apr 30 00:57:33.851161 containerd[1554]: time="2025-04-30T00:57:33.851133275Z" level=info msg="RemoveContainer for \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\""
Apr 30 00:57:33.854016 containerd[1554]: time="2025-04-30T00:57:33.853946031Z" level=info msg="RemoveContainer for \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\" returns successfully"
Apr 30 00:57:33.854362 kubelet[2727]: I0430 00:57:33.854260 2727 scope.go:117] "RemoveContainer" containerID="2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5"
Apr 30 00:57:33.856557 containerd[1554]: time="2025-04-30T00:57:33.856149379Z" level=info msg="RemoveContainer for \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\""
Apr 30 00:57:33.862534 containerd[1554]: time="2025-04-30T00:57:33.862487380Z" level=info
msg="RemoveContainer for \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\" returns successfully" Apr 30 00:57:33.863458 kubelet[2727]: I0430 00:57:33.862841 2727 scope.go:117] "RemoveContainer" containerID="363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f" Apr 30 00:57:33.866585 containerd[1554]: time="2025-04-30T00:57:33.866536912Z" level=info msg="RemoveContainer for \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\"" Apr 30 00:57:33.873336 containerd[1554]: time="2025-04-30T00:57:33.873286039Z" level=info msg="RemoveContainer for \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\" returns successfully" Apr 30 00:57:33.874107 kubelet[2727]: I0430 00:57:33.873542 2727 scope.go:117] "RemoveContainer" containerID="65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a" Apr 30 00:57:33.876875 containerd[1554]: time="2025-04-30T00:57:33.876564241Z" level=info msg="RemoveContainer for \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\"" Apr 30 00:57:33.880685 containerd[1554]: time="2025-04-30T00:57:33.880645373Z" level=info msg="RemoveContainer for \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\" returns successfully" Apr 30 00:57:33.881145 kubelet[2727]: I0430 00:57:33.881054 2727 scope.go:117] "RemoveContainer" containerID="d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6" Apr 30 00:57:33.881375 containerd[1554]: time="2025-04-30T00:57:33.881325622Z" level=error msg="ContainerStatus for \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\": not found" Apr 30 00:57:33.892128 kubelet[2727]: E0430 00:57:33.892032 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\": not found" containerID="d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6" Apr 30 00:57:33.892128 kubelet[2727]: I0430 00:57:33.892092 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6"} err="failed to get container status \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9d980311950b5e340772f816db31d31ea36a2055ee2169c24d07a86a769eac6\": not found" Apr 30 00:57:33.892128 kubelet[2727]: I0430 00:57:33.892191 2727 scope.go:117] "RemoveContainer" containerID="5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff" Apr 30 00:57:33.893718 kubelet[2727]: E0430 00:57:33.892649 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\": not found" containerID="5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff" Apr 30 00:57:33.893718 kubelet[2727]: I0430 00:57:33.892676 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff"} err="failed to get container status \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\": not found" Apr 30 00:57:33.893718 kubelet[2727]: I0430 00:57:33.892700 2727 scope.go:117] "RemoveContainer" containerID="2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5" Apr 30 00:57:33.893718 kubelet[2727]: E0430 00:57:33.892956 2727 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\": not found" containerID="2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5" Apr 30 00:57:33.893718 kubelet[2727]: I0430 00:57:33.892973 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5"} err="failed to get container status \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\": not found" Apr 30 00:57:33.893718 kubelet[2727]: I0430 00:57:33.892986 2727 scope.go:117] "RemoveContainer" containerID="363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f" Apr 30 00:57:33.893982 containerd[1554]: time="2025-04-30T00:57:33.892494925Z" level=error msg="ContainerStatus for \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5462ce13b2d33b4e41d53c624fd5f731f850b120ad0f20046c2429d19f2682ff\": not found" Apr 30 00:57:33.893982 containerd[1554]: time="2025-04-30T00:57:33.892865729Z" level=error msg="ContainerStatus for \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ab82b55631f96ac10b853a44001e8e87fb098937e031b8e75568dd5e33d17b5\": not found" Apr 30 00:57:33.893982 containerd[1554]: time="2025-04-30T00:57:33.893154813Z" level=error msg="ContainerStatus for \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\": not found" Apr 30 00:57:33.893982 containerd[1554]: time="2025-04-30T00:57:33.893525258Z" level=error msg="ContainerStatus for \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\": not found" Apr 30 00:57:33.894142 kubelet[2727]: E0430 00:57:33.893280 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\": not found" containerID="363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f" Apr 30 00:57:33.894142 kubelet[2727]: I0430 00:57:33.893299 2727 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f"} err="failed to get container status \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\": rpc error: code = NotFound desc = an error occurred when try to find container \"363d3ce2d5715415a8614f537e7f2180668749378e1b8a33d1b1ac306368a37f\": not found" Apr 30 00:57:33.894142 kubelet[2727]: I0430 00:57:33.893312 2727 scope.go:117] "RemoveContainer" containerID="65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a" Apr 30 00:57:33.894142 kubelet[2727]: E0430 00:57:33.893667 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\": not found" containerID="65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a" Apr 30 00:57:33.894142 kubelet[2727]: I0430 00:57:33.893690 2727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a"} err="failed to get container status \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\": rpc error: code = NotFound desc = an error occurred when try to find container \"65802091415c5fed335a853adb54b3ca562d0959ae6146feda95456398bb696a\": not found" Apr 30 00:57:33.894142 kubelet[2727]: I0430 00:57:33.893707 2727 scope.go:117] "RemoveContainer" containerID="795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97" Apr 30 00:57:33.894743 containerd[1554]: time="2025-04-30T00:57:33.894719233Z" level=info msg="RemoveContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\"" Apr 30 00:57:33.898681 containerd[1554]: time="2025-04-30T00:57:33.898632443Z" level=info msg="RemoveContainer for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" returns successfully" Apr 30 00:57:33.899018 kubelet[2727]: I0430 00:57:33.898872 2727 scope.go:117] "RemoveContainer" containerID="795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97" Apr 30 00:57:33.899161 containerd[1554]: time="2025-04-30T00:57:33.899112049Z" level=error msg="ContainerStatus for \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\": not found" Apr 30 00:57:33.899387 kubelet[2727]: E0430 00:57:33.899252 2727 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\": not found" containerID="795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97" Apr 30 00:57:33.899387 kubelet[2727]: I0430 00:57:33.899294 2727 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97"} err="failed to get container status \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\": rpc error: code = NotFound desc = an error occurred when try to find container \"795bd485cfc36c7b2a9ee47f3a186c099f2eef94574e8b9d387e315b4ad77e97\": not found" Apr 30 00:57:33.966897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-372fecb858390a4df390b3e3fb96ea37b14845424b6ed8c0e6a3557a57b9d0d6-rootfs.mount: Deactivated successfully. Apr 30 00:57:33.967045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41cc1ee52fa575c11fb58dedfd76b862b72a4ac4964c70be1a5bc557098cb26c-rootfs.mount: Deactivated successfully. Apr 30 00:57:33.967151 systemd[1]: var-lib-kubelet-pods-629d93b3\x2d178c\x2d402b\x2dbad4\x2d247be012ead9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtnw8f.mount: Deactivated successfully. Apr 30 00:57:33.967238 systemd[1]: var-lib-kubelet-pods-9a2c9a16\x2d9cd9\x2d4236\x2da541\x2d42db34bc5af5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqf2x7.mount: Deactivated successfully. Apr 30 00:57:33.967324 systemd[1]: var-lib-kubelet-pods-9a2c9a16\x2d9cd9\x2d4236\x2da541\x2d42db34bc5af5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 30 00:57:33.967411 systemd[1]: var-lib-kubelet-pods-9a2c9a16\x2d9cd9\x2d4236\x2da541\x2d42db34bc5af5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 30 00:57:34.640928 kubelet[2727]: I0430 00:57:34.640883 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629d93b3-178c-402b-bad4-247be012ead9" path="/var/lib/kubelet/pods/629d93b3-178c-402b-bad4-247be012ead9/volumes" Apr 30 00:57:34.641429 kubelet[2727]: I0430 00:57:34.641405 2727 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" path="/var/lib/kubelet/pods/9a2c9a16-9cd9-4236-a541-42db34bc5af5/volumes" Apr 30 00:57:34.641881 kubelet[2727]: E0430 00:57:34.641436 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:34.891190 sshd[4376]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:34.901733 systemd[1]: Started sshd@23-10.0.0.142:22-10.0.0.1:45576.service - OpenSSH per-connection server daemon (10.0.0.1:45576). Apr 30 00:57:34.902305 systemd[1]: sshd@22-10.0.0.142:22-10.0.0.1:48536.service: Deactivated successfully. Apr 30 00:57:34.905242 systemd-logind[1528]: Session 23 logged out. Waiting for processes to exit. Apr 30 00:57:34.906107 systemd[1]: session-23.scope: Deactivated successfully. Apr 30 00:57:34.907183 systemd-logind[1528]: Removed session 23. Apr 30 00:57:34.940553 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 45576 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:57:34.942013 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:34.947569 systemd-logind[1528]: New session 24 of user core. Apr 30 00:57:34.957736 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 30 00:57:35.864724 sshd[4543]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:35.874983 systemd[1]: Started sshd@24-10.0.0.142:22-10.0.0.1:45580.service - OpenSSH per-connection server daemon (10.0.0.1:45580). 
Apr 30 00:57:35.877231 systemd[1]: sshd@23-10.0.0.142:22-10.0.0.1:45576.service: Deactivated successfully. Apr 30 00:57:35.879175 kubelet[2727]: I0430 00:57:35.879131 2727 topology_manager.go:215] "Topology Admit Handler" podUID="94e10b96-8df1-4a96-9278-85a8296a1897" podNamespace="kube-system" podName="cilium-4sxbr" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879194 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="cilium-agent" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879205 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="mount-bpf-fs" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879213 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="mount-cgroup" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879218 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="apply-sysctl-overwrites" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879223 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="clean-cilium-state" Apr 30 00:57:35.879578 kubelet[2727]: E0430 00:57:35.879229 2727 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="629d93b3-178c-402b-bad4-247be012ead9" containerName="cilium-operator" Apr 30 00:57:35.879578 kubelet[2727]: I0430 00:57:35.879251 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a2c9a16-9cd9-4236-a541-42db34bc5af5" containerName="cilium-agent" Apr 30 00:57:35.879578 kubelet[2727]: I0430 00:57:35.879258 2727 memory_manager.go:354] "RemoveStaleState removing state" podUID="629d93b3-178c-402b-bad4-247be012ead9" containerName="cilium-operator" Apr 30 00:57:35.899669 systemd[1]: session-24.scope: Deactivated 
successfully. Apr 30 00:57:35.906666 systemd-logind[1528]: Session 24 logged out. Waiting for processes to exit. Apr 30 00:57:35.913653 systemd-logind[1528]: Removed session 24. Apr 30 00:57:35.941823 sshd[4556]: Accepted publickey for core from 10.0.0.1 port 45580 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:57:35.943412 sshd[4556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:35.948097 systemd-logind[1528]: New session 25 of user core. Apr 30 00:57:35.958799 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 30 00:57:35.991586 kubelet[2727]: I0430 00:57:35.991532 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-cilium-cgroup\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991586 kubelet[2727]: I0430 00:57:35.991582 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-lib-modules\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991604 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-xtables-lock\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991624 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-etc-cni-netd\") pod 
\"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991642 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-cni-path\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991659 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/94e10b96-8df1-4a96-9278-85a8296a1897-clustermesh-secrets\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991675 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/94e10b96-8df1-4a96-9278-85a8296a1897-cilium-ipsec-secrets\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991760 kubelet[2727]: I0430 00:57:35.991692 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szmc\" (UniqueName: \"kubernetes.io/projected/94e10b96-8df1-4a96-9278-85a8296a1897-kube-api-access-4szmc\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991897 kubelet[2727]: I0430 00:57:35.991709 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-bpf-maps\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 
00:57:35.991897 kubelet[2727]: I0430 00:57:35.991736 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-hostproc\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991897 kubelet[2727]: I0430 00:57:35.991753 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-host-proc-sys-kernel\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991897 kubelet[2727]: I0430 00:57:35.991770 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/94e10b96-8df1-4a96-9278-85a8296a1897-hubble-tls\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991897 kubelet[2727]: I0430 00:57:35.991803 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/94e10b96-8df1-4a96-9278-85a8296a1897-cilium-config-path\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.991897 kubelet[2727]: I0430 00:57:35.991838 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-host-proc-sys-net\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:35.992018 kubelet[2727]: I0430 00:57:35.991854 2727 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/94e10b96-8df1-4a96-9278-85a8296a1897-cilium-run\") pod \"cilium-4sxbr\" (UID: \"94e10b96-8df1-4a96-9278-85a8296a1897\") " pod="kube-system/cilium-4sxbr" Apr 30 00:57:36.009680 sshd[4556]: pam_unix(sshd:session): session closed for user core Apr 30 00:57:36.020742 systemd[1]: Started sshd@25-10.0.0.142:22-10.0.0.1:45582.service - OpenSSH per-connection server daemon (10.0.0.1:45582). Apr 30 00:57:36.021584 systemd[1]: sshd@24-10.0.0.142:22-10.0.0.1:45580.service: Deactivated successfully. Apr 30 00:57:36.023605 systemd[1]: session-25.scope: Deactivated successfully. Apr 30 00:57:36.025282 systemd-logind[1528]: Session 25 logged out. Waiting for processes to exit. Apr 30 00:57:36.027215 systemd-logind[1528]: Removed session 25. Apr 30 00:57:36.057378 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 45582 ssh2: RSA SHA256:OQmGrWkmfyTmroJqGUhs3duM8Iw7lLRvinb8RSQNd5Y Apr 30 00:57:36.058791 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 00:57:36.064684 systemd-logind[1528]: New session 26 of user core. Apr 30 00:57:36.077903 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 30 00:57:36.200579 kubelet[2727]: E0430 00:57:36.200529 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:36.202603 containerd[1554]: time="2025-04-30T00:57:36.201036014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sxbr,Uid:94e10b96-8df1-4a96-9278-85a8296a1897,Namespace:kube-system,Attempt:0,}" Apr 30 00:57:36.222372 containerd[1554]: time="2025-04-30T00:57:36.222270542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 00:57:36.222372 containerd[1554]: time="2025-04-30T00:57:36.222338383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 00:57:36.222372 containerd[1554]: time="2025-04-30T00:57:36.222355903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:57:36.222629 containerd[1554]: time="2025-04-30T00:57:36.222481984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 00:57:36.268414 containerd[1554]: time="2025-04-30T00:57:36.268307719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4sxbr,Uid:94e10b96-8df1-4a96-9278-85a8296a1897,Namespace:kube-system,Attempt:0,} returns sandbox id \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\"" Apr 30 00:57:36.269027 kubelet[2727]: E0430 00:57:36.269004 2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 30 00:57:36.275271 containerd[1554]: time="2025-04-30T00:57:36.274047986Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 30 00:57:36.287436 containerd[1554]: time="2025-04-30T00:57:36.287376181Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a9fe4de12f88effd54e21b933ffb83abe2dd6247d9f14cd36aa7e89f50cd0e3\"" Apr 30 00:57:36.289064 containerd[1554]: time="2025-04-30T00:57:36.288699677Z" level=info msg="StartContainer for 
\"4a9fe4de12f88effd54e21b933ffb83abe2dd6247d9f14cd36aa7e89f50cd0e3\""
Apr 30 00:57:36.354962 containerd[1554]: time="2025-04-30T00:57:36.354915209Z" level=info msg="StartContainer for \"4a9fe4de12f88effd54e21b933ffb83abe2dd6247d9f14cd36aa7e89f50cd0e3\" returns successfully"
Apr 30 00:57:36.410608 containerd[1554]: time="2025-04-30T00:57:36.410545178Z" level=info msg="shim disconnected" id=4a9fe4de12f88effd54e21b933ffb83abe2dd6247d9f14cd36aa7e89f50cd0e3 namespace=k8s.io
Apr 30 00:57:36.411046 containerd[1554]: time="2025-04-30T00:57:36.410858501Z" level=warning msg="cleaning up after shim disconnected" id=4a9fe4de12f88effd54e21b933ffb83abe2dd6247d9f14cd36aa7e89f50cd0e3 namespace=k8s.io
Apr 30 00:57:36.411046 containerd[1554]: time="2025-04-30T00:57:36.410875501Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:36.640998 kubelet[2727]: E0430 00:57:36.640870    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:36.698582 kubelet[2727]: E0430 00:57:36.698533    2727 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 30 00:57:36.856495 kubelet[2727]: E0430 00:57:36.856435    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:36.859856 containerd[1554]: time="2025-04-30T00:57:36.859719136Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 30 00:57:36.879123 containerd[1554]: time="2025-04-30T00:57:36.879057962Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0e7a16e78842d1e9d049375f405b920e61b63c7917e1326b14fd9fd8f233f437\""
Apr 30 00:57:36.881402 containerd[1554]: time="2025-04-30T00:57:36.879851251Z" level=info msg="StartContainer for \"0e7a16e78842d1e9d049375f405b920e61b63c7917e1326b14fd9fd8f233f437\""
Apr 30 00:57:36.942662 containerd[1554]: time="2025-04-30T00:57:36.942540622Z" level=info msg="StartContainer for \"0e7a16e78842d1e9d049375f405b920e61b63c7917e1326b14fd9fd8f233f437\" returns successfully"
Apr 30 00:57:36.969997 containerd[1554]: time="2025-04-30T00:57:36.969859141Z" level=info msg="shim disconnected" id=0e7a16e78842d1e9d049375f405b920e61b63c7917e1326b14fd9fd8f233f437 namespace=k8s.io
Apr 30 00:57:36.969997 containerd[1554]: time="2025-04-30T00:57:36.969913302Z" level=warning msg="cleaning up after shim disconnected" id=0e7a16e78842d1e9d049375f405b920e61b63c7917e1326b14fd9fd8f233f437 namespace=k8s.io
Apr 30 00:57:36.969997 containerd[1554]: time="2025-04-30T00:57:36.969929302Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:37.859660 kubelet[2727]: E0430 00:57:37.859632    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:37.863852 containerd[1554]: time="2025-04-30T00:57:37.863459575Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 30 00:57:37.883186 containerd[1554]: time="2025-04-30T00:57:37.882980876Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71\""
Apr 30 00:57:37.885020 containerd[1554]: time="2025-04-30T00:57:37.883617563Z" level=info msg="StartContainer for \"5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71\""
Apr 30 00:57:37.939068 containerd[1554]: time="2025-04-30T00:57:37.939020869Z" level=info msg="StartContainer for \"5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71\" returns successfully"
Apr 30 00:57:37.974018 containerd[1554]: time="2025-04-30T00:57:37.971314314Z" level=info msg="shim disconnected" id=5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71 namespace=k8s.io
Apr 30 00:57:37.974018 containerd[1554]: time="2025-04-30T00:57:37.971369875Z" level=warning msg="cleaning up after shim disconnected" id=5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71 namespace=k8s.io
Apr 30 00:57:37.974018 containerd[1554]: time="2025-04-30T00:57:37.971377955Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:38.097831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ba206e0ad549f33b1988c5f5fb62709861e171c9791345f7cfcfe5b54df7d71-rootfs.mount: Deactivated successfully.
Apr 30 00:57:38.605132 kubelet[2727]: I0430 00:57:38.605067    2727 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-04-30T00:57:38Z","lastTransitionTime":"2025-04-30T00:57:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Apr 30 00:57:38.864641 kubelet[2727]: E0430 00:57:38.863929    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:38.868153 containerd[1554]: time="2025-04-30T00:57:38.867406945Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 30 00:57:38.895038 containerd[1554]: time="2025-04-30T00:57:38.894963847Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b\""
Apr 30 00:57:38.895915 containerd[1554]: time="2025-04-30T00:57:38.895520053Z" level=info msg="StartContainer for \"c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b\""
Apr 30 00:57:38.960879 containerd[1554]: time="2025-04-30T00:57:38.960840289Z" level=info msg="StartContainer for \"c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b\" returns successfully"
Apr 30 00:57:38.991948 containerd[1554]: time="2025-04-30T00:57:38.991885029Z" level=info msg="shim disconnected" id=c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b namespace=k8s.io
Apr 30 00:57:38.991948 containerd[1554]: time="2025-04-30T00:57:38.991944510Z" level=warning msg="cleaning up after shim disconnected" id=c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b namespace=k8s.io
Apr 30 00:57:38.991948 containerd[1554]: time="2025-04-30T00:57:38.991954190Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 30 00:57:39.097925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c55460bf97d65343d3433df115b83a2ae05a67822770bc255aa54d4e6261339b-rootfs.mount: Deactivated successfully.
Apr 30 00:57:39.870872 kubelet[2727]: E0430 00:57:39.870760    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:39.873471 containerd[1554]: time="2025-04-30T00:57:39.873399678Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 30 00:57:39.911541 containerd[1554]: time="2025-04-30T00:57:39.898054940Z" level=info msg="CreateContainer within sandbox \"75cca689f658b30e175303dfd2a9b583014650b8d7c97c9a454cd7fecad10f42\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2bcbc86b81b97bcb4d9564674ffdfae7b4865b41a967dadef69f3259fac7b2e3\""
Apr 30 00:57:39.911541 containerd[1554]: time="2025-04-30T00:57:39.900636167Z" level=info msg="StartContainer for \"2bcbc86b81b97bcb4d9564674ffdfae7b4865b41a967dadef69f3259fac7b2e3\""
Apr 30 00:57:39.987218 containerd[1554]: time="2025-04-30T00:57:39.986991885Z" level=info msg="StartContainer for \"2bcbc86b81b97bcb4d9564674ffdfae7b4865b41a967dadef69f3259fac7b2e3\" returns successfully"
Apr 30 00:57:40.280467 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Apr 30 00:57:40.876232 kubelet[2727]: E0430 00:57:40.876184    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:42.202119 kubelet[2727]: E0430 00:57:42.202068    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:43.217534 systemd-networkd[1238]: lxc_health: Link UP
Apr 30 00:57:43.226876 systemd-networkd[1238]: lxc_health: Gained carrier
Apr 30 00:57:44.204962 kubelet[2727]: E0430 00:57:44.204667    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:44.239455 kubelet[2727]: I0430 00:57:44.239345    2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4sxbr" podStartSLOduration=9.23932867 podStartE2EDuration="9.23932867s" podCreationTimestamp="2025-04-30 00:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-30 00:57:40.893431545 +0000 UTC m=+84.360295279" watchObservedRunningTime="2025-04-30 00:57:44.23932867 +0000 UTC m=+87.706192404"
Apr 30 00:57:44.470636 systemd-networkd[1238]: lxc_health: Gained IPv6LL
Apr 30 00:57:44.887062 kubelet[2727]: E0430 00:57:44.886940    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:45.888949 kubelet[2727]: E0430 00:57:45.888676    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:46.764479 kubelet[2727]: E0430 00:57:46.763480    2727 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44950->127.0.0.1:45189: write tcp 127.0.0.1:44950->127.0.0.1:45189: write: broken pipe
Apr 30 00:57:47.638920 kubelet[2727]: E0430 00:57:47.638869    2727 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 30 00:57:48.879668 sshd[4566]: pam_unix(sshd:session): session closed for user core
Apr 30 00:57:48.883052 systemd[1]: sshd@25-10.0.0.142:22-10.0.0.1:45582.service: Deactivated successfully.
Apr 30 00:57:48.885494 systemd[1]: session-26.scope: Deactivated successfully.
Apr 30 00:57:48.885528 systemd-logind[1528]: Session 26 logged out. Waiting for processes to exit.
Apr 30 00:57:48.886725 systemd-logind[1528]: Removed session 26.